2026-03-09T13:39:26.778 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T13:39:26.828 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T13:39:26.851 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495
branch: squid
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python}
email: null
first_in_suite: false
flavor: default
job_id: '495'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - but it is still running
    - overall HEALTH_
    - \(OSDMAP_FLAGS\)
    - \(PG_
    - \(OSD_
    - \(OBJECT_
    - \(POOL_APP_NOT_ENABLED\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: cephadm-package
  install:
    ceph:
      extra_system_packages:
        deb:
        - python3-pytest
        rpm:
        - python3-pytest
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_packages:
    - cephadm
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM383Lm+mt8iez+bpT8XVmvDOucFbxH+E2ErfhGaWmWV7o0ppQ8aFCyVbn5DfHMv/E4yBASGOQjcje51HF0LkmU=
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAyLpvm78kf6lBsM8OZdK27qExo1fTVUEJ7S+sx0cePuLoH1MjbjiRsQcXB0vWbzJYSw94z6LcISNEAG0qC6J6E=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- workunit:
    clients:
      client.0:
      - rados/test_python.sh
    timeout: 1h
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T13:39:26.851 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T13:39:26.852 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T13:39:26.852 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T13:39:26.852 INFO:teuthology.task.internal:Checking packages...
2026-03-09T13:39:26.852 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T13:39:26.852 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T13:39:26.852 INFO:teuthology.packaging:ref: None
2026-03-09T13:39:26.852 INFO:teuthology.packaging:tag: None
2026-03-09T13:39:26.852 INFO:teuthology.packaging:branch: squid
2026-03-09T13:39:26.852 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:39:26.852 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-09T13:39:27.629 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-09T13:39:27.630 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T13:39:27.630 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T13:39:27.630 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T13:39:27.631 INFO:teuthology.task.internal:Saving configuration
2026-03-09T13:39:27.635 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T13:39:27.636 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T13:39:27.642 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 13:38:12.767469', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM383Lm+mt8iez+bpT8XVmvDOucFbxH+E2ErfhGaWmWV7o0ppQ8aFCyVbn5DfHMv/E4yBASGOQjcje51HF0LkmU='}
2026-03-09T13:39:27.646 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 13:38:12.765484', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAyLpvm78kf6lBsM8OZdK27qExo1fTVUEJ7S+sx0cePuLoH1MjbjiRsQcXB0vWbzJYSw94z6LcISNEAG0qC6J6E='}
2026-03-09T13:39:27.646 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T13:39:27.647 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-09T13:39:27.647 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-09T13:39:27.647 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T13:39:27.653 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-09T13:39:27.657 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-09T13:39:27.657 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f57de172170>, signals=[15])
2026-03-09T13:39:27.657 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T13:39:27.658 INFO:teuthology.task.internal:Opening connections...
2026-03-09T13:39:27.658 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-09T13:39:27.659 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:39:27.719 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-09T13:39:27.720 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:39:27.780 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T13:39:27.781 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-09T13:39:27.827 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-09T13:39:27.828 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:NAME="CentOS Stream"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:ID="centos"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE="rhel fedora"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:PLATFORM_ID="platform:el9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:ANSI_COLOR="0;31"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:LOGO="fedora-logo-icon"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://centos.org/"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T13:39:27.882 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T13:39:27.882 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-09T13:39:27.887 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-09T13:39:27.902 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-09T13:39:27.902 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:NAME="CentOS Stream"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:ID="centos"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE="rhel fedora"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:PLATFORM_ID="platform:el9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:ANSI_COLOR="0;31"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:LOGO="fedora-logo-icon"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://centos.org/"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-09T13:39:27.956 INFO:teuthology.orchestra.run.vm04.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-09T13:39:27.956 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-09T13:39:27.960 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T13:39:27.962 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T13:39:27.963 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T13:39:27.963 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-09T13:39:27.965 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-09T13:39:28.011 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T13:39:28.012 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T13:39:28.012 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-09T13:39:28.020 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-09T13:39:28.035 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T13:39:28.066 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T13:39:28.066 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T13:39:28.074 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-09T13:39:28.088 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:39:28.282 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-09T13:39:28.296 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:39:28.498 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T13:39:28.499 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T13:39:28.499 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T13:39:28.501 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T13:39:28.514 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T13:39:28.516 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T13:39:28.517 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T13:39:28.517 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T13:39:28.556 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T13:39:28.574 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T13:39:28.576 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T13:39:28.576 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T13:39:28.625 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:39:28.625 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T13:39:28.638 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T13:39:28.638 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T13:39:28.667 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T13:39:28.689 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:39:28.697 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:39:28.703 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:39:28.712 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T13:39:28.713 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T13:39:28.714 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T13:39:28.715 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T13:39:28.740 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T13:39:28.779 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T13:39:28.782 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T13:39:28.782 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T13:39:28.804 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T13:39:28.837 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T13:39:28.878 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T13:39:28.936 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:39:28.936 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T13:39:28.996 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T13:39:29.018 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T13:39:29.075 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:39:29.075 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T13:39:29.133 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-09T13:39:29.135 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-09T13:39:29.160 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T13:39:29.201 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T13:39:29.519 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T13:39:29.521 INFO:teuthology.task.internal:Starting timer...
2026-03-09T13:39:29.521 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T13:39:29.524 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T13:39:29.526 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-09T13:39:29.526 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-09T13:39:29.526 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-09T13:39:29.526 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T13:39:29.526 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T13:39:29.526 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T13:39:29.526 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T13:39:29.528 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T13:39:29.528 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T13:39:29.529 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T13:39:30.020 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T13:39:30.025 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T13:39:30.026 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorynvmu55hs --limit vm03.local,vm04.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T13:41:18.862 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm04.local')]
2026-03-09T13:41:18.862 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-09T13:41:18.863 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:41:18.928 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-09T13:41:19.014 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-09T13:41:19.014 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-09T13:41:19.014 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T13:41:19.079 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-09T13:41:19.160 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-09T13:41:19.160 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T13:41:19.163 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T13:41:19.163 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T13:41:19.163 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T13:41:19.165 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T13:41:19.165 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T13:41:19.205 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T13:41:19.222 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T13:41:19.250 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-09T13:41:19.260 INFO:teuthology.orchestra.run.vm03.stderr:sudo: ntpd: command not found
2026-03-09T13:41:19.266 INFO:teuthology.orchestra.run.vm04.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-09T13:41:19.276 INFO:teuthology.orchestra.run.vm03.stdout:506 Cannot talk to daemon
2026-03-09T13:41:19.294 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T13:41:19.299 INFO:teuthology.orchestra.run.vm04.stderr:sudo: ntpd: command not found
2026-03-09T13:41:19.312 INFO:teuthology.orchestra.run.vm04.stdout:506 Cannot talk to daemon
2026-03-09T13:41:19.312 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T13:41:19.333 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-09T13:41:19.349 INFO:teuthology.orchestra.run.vm04.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-09T13:41:19.365 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-09T13:41:19.368 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-09T13:41:19.368 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-09T13:41:19.411 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T13:41:19.413 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address         Stratum Poll Reach LastRx Last sample
2026-03-09T13:41:19.413 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T13:41:19.413 INFO:teuthology.run_tasks:Running task pexec...
2026-03-09T13:41:19.416 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-09T13:41:19.417 DEBUG:teuthology.orchestra.run.vm03:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T13:41:19.417 DEBUG:teuthology.orchestra.run.vm04:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-09T13:41:19.418 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf remove nvme-cli -y
2026-03-09T13:41:19.418 DEBUG:teuthology.task.pexec:ubuntu@vm04.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:41:19.418 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm04.local
2026-03-09T13:41:19.418 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T13:41:19.418 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:41:19.419 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf remove nvme-cli -y
2026-03-09T13:41:19.419 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:41:19.419 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm03.local
2026-03-09T13:41:19.419 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-09T13:41:19.419 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-09T13:41:19.668 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: nvme-cli
2026-03-09T13:41:19.668 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T13:41:19.671 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:41:19.672 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T13:41:19.672 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:41:19.679 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: nvme-cli
2026-03-09T13:41:19.680 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T13:41:19.683 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T13:41:19.684 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T13:41:19.684 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T13:41:20.181 INFO:teuthology.orchestra.run.vm04.stdout:Last metadata expiration check: 0:01:05 ago on Mon 09 Mar 2026 01:40:15 PM UTC.
2026-03-09T13:41:20.239 INFO:teuthology.orchestra.run.vm03.stdout:Last metadata expiration check: 0:01:01 ago on Mon 09 Mar 2026 01:40:19 PM UTC.
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: Package                Architecture   Version            Repository      Size
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli               x86_64         2.16-1.el9         baseos         1.2 M
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: nvmetcli               noarch         0.8-3.el9          baseos          44 k
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell    noarch         1:1.1.30-1.el9     baseos          72 k
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: python3-kmod           x86_64         0.9-32.el9         baseos          84 k
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing      noarch         2.4.7-9.el9        baseos         150 k
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout: python3-urwid          x86_64         2.1.2-4.el9        baseos         837 k
2026-03-09T13:41:20.299 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:Install 6 Packages
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 2.3 M
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:Installed size: 11 M
2026-03-09T13:41:20.300 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages:
2026-03-09T13:41:20.373 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: Package                Architecture   Version            Repository      Size
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli               x86_64         2.16-1.el9         baseos         1.2 M
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: nvmetcli               noarch         0.8-3.el9          baseos          44 k
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell    noarch         1:1.1.30-1.el9     baseos          72 k
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: python3-kmod           x86_64         0.9-32.el9         baseos          84 k
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing      noarch         2.4.7-9.el9        baseos         150 k
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout: python3-urwid          x86_64         2.1.2-4.el9        baseos         837 k
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Install 6 Packages
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 2.3 M
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Installed size: 11 M
2026-03-09T13:41:20.374 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages:
2026-03-09T13:41:21.016 INFO:teuthology.orchestra.run.vm03.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm            251 kB/s |  44 kB     00:00
2026-03-09T13:41:21.033 INFO:teuthology.orchestra.run.vm04.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm            162 kB/s |  44 kB     00:00
2026-03-09T13:41:21.041 INFO:teuthology.orchestra.run.vm04.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 258 kB/s |  72 kB     00:00
2026-03-09T13:41:21.049 INFO:teuthology.orchestra.run.vm03.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 346 kB/s |  72 kB     00:00
2026-03-09T13:41:21.092 INFO:teuthology.orchestra.run.vm03.stdout:(3/6): nvme-cli-2.16-1.el9.x86_64.rpm           4.6 MB/s | 1.2 MB     00:00
2026-03-09T13:41:21.096 INFO:teuthology.orchestra.run.vm03.stdout:(4/6): python3-kmod-0.9-32.el9.x86_64.rpm       1.0 MB/s |  84 kB     00:00
2026-03-09T13:41:21.114 INFO:teuthology.orchestra.run.vm03.stdout:(5/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 2.3 MB/s | 150 kB     00:00
2026-03-09T13:41:21.151 INFO:teuthology.orchestra.run.vm03.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm      14 MB/s | 837 kB     00:00
2026-03-09T13:41:21.152 INFO:teuthology.orchestra.run.vm03.stdout:--------------------------------------------------------------------------------
2026-03-09T13:41:21.152 INFO:teuthology.orchestra.run.vm03.stdout:Total                                           3.0 MB/s | 2.3 MB     00:00
2026-03-09T13:41:21.224 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T13:41:21.231 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T13:41:21.231 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T13:41:21.295 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T13:41:21.295 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T13:41:21.306 INFO:teuthology.orchestra.run.vm04.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 308 kB/s | 84 kB 00:00
2026-03-09T13:41:21.373 INFO:teuthology.orchestra.run.vm04.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 453 kB/s | 150 kB 00:00
2026-03-09T13:41:21.499 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T13:41:21.510 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-09T13:41:21.522 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-09T13:41:21.532 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-09T13:41:21.541 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-09T13:41:21.546 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-09T13:41:21.574 INFO:teuthology.orchestra.run.vm04.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 1.4 MB/s | 1.2 MB 00:00
2026-03-09T13:41:21.735 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-09T13:41:21.741 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-09T13:41:21.785 INFO:teuthology.orchestra.run.vm04.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 1.7 MB/s | 837 kB 00:00
2026-03-09T13:41:21.786 INFO:teuthology.orchestra.run.vm04.stdout:--------------------------------------------------------------------------------
2026-03-09T13:41:21.786 INFO:teuthology.orchestra.run.vm04.stdout:Total 1.6 MB/s | 2.3 MB 00:01
2026-03-09T13:41:21.867 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T13:41:21.876 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T13:41:21.876 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T13:41:21.935 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T13:41:21.936 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T13:41:22.123 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T13:41:22.137 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-09T13:41:22.151 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-09T13:41:22.161 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-09T13:41:22.169 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-09T13:41:22.170 INFO:teuthology.orchestra.run.vm04.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-09T13:41:22.189 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-09T13:41:22.189 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T13:41:22.189 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:41:22.372 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-09T13:41:22.380 INFO:teuthology.orchestra.run.vm04.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-09T13:41:22.778 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-09T13:41:22.778 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-09T13:41:22.778 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-09T13:41:22.778 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-09T13:41:22.778 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-09T13:41:22.803 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-09T13:41:22.803 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T13:41:22.803 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:41:22.907 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-09T13:41:22.907 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout:Installed:
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:41:22.908 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T13:41:22.989 DEBUG:teuthology.parallel:result is None
2026-03-09T13:41:23.381 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-09T13:41:23.382 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-09T13:41:23.382 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-09T13:41:23.382 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-09T13:41:23.382 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout:Installed:
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:41:23.489 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:41:23.567 DEBUG:teuthology.parallel:result is None
2026-03-09T13:41:23.567 INFO:teuthology.run_tasks:Running task install...
2026-03-09T13:41:23.570 DEBUG:teuthology.task.install:project ceph
2026-03-09T13:41:23.570 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'extra_system_packages': {'deb': ['python3-pytest'], 'rpm': ['python3-pytest']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_packages': ['cephadm'], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T13:41:23.570 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-09T13:41:23.570 INFO:teuthology.task.install:Using flavor: default
2026-03-09T13:41:23.574 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T13:41:23.574 INFO:teuthology.task.install:extra packages: []
2026-03-09T13:41:23.574 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T13:41:23.574 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:41:23.575 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-pytest', 'python3-xmltodict', 'python3-jmespath'], 'rpm': ['python3-pytest', 'bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-09T13:41:23.575 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:41:24.167 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-09T13:41:24.167 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-09T13:41:24.225 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-09T13:41:24.225 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-09T13:41:24.745 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T13:41:24.745 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:41:24.745 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T13:41:24.753 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-09T13:41:24.753 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:41:24.753 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-09T13:41:24.783 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T13:41:24.783 DEBUG:teuthology.orchestra.run.vm04:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T13:41:24.785 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, python3-pytest, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-09T13:41:24.785 DEBUG:teuthology.orchestra.run.vm03:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-09T13:41:24.856 DEBUG:teuthology.orchestra.run.vm04:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-09T13:41:24.866 DEBUG:teuthology.orchestra.run.vm03:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-09T13:41:24.938 DEBUG:teuthology.orchestra.run.vm04:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-09T13:41:24.951 DEBUG:teuthology.orchestra.run.vm03:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-09T13:41:24.987 INFO:teuthology.orchestra.run.vm03.stdout:check_obsoletes = 1
2026-03-09T13:41:24.989 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-03-09T13:41:25.008 INFO:teuthology.orchestra.run.vm04.stdout:check_obsoletes = 1
2026-03-09T13:41:25.009 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-09T13:41:25.187 INFO:teuthology.orchestra.run.vm04.stdout:41 files removed
2026-03-09T13:41:25.210 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-09T13:41:25.217 INFO:teuthology.orchestra.run.vm03.stdout:41 files removed
2026-03-09T13:41:25.249 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd python3-pytest bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-09T13:41:26.596 INFO:teuthology.orchestra.run.vm04.stdout:ceph packages for x86_64 69 kB/s | 84 kB 00:01
2026-03-09T13:41:26.626 INFO:teuthology.orchestra.run.vm03.stdout:ceph packages for x86_64 72 kB/s | 84 kB 00:01
2026-03-09T13:41:27.548 INFO:teuthology.orchestra.run.vm04.stdout:ceph noarch packages 13 kB/s | 12 kB 00:00
2026-03-09T13:41:27.635 INFO:teuthology.orchestra.run.vm03.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-09T13:41:28.521 INFO:teuthology.orchestra.run.vm04.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-09T13:41:28.609 INFO:teuthology.orchestra.run.vm03.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-09T13:41:29.305 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - BaseOS 12 MB/s | 8.9 MB 00:00
2026-03-09T13:41:30.060 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - BaseOS 6.3 MB/s | 8.9 MB 00:01
2026-03-09T13:41:31.551 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - AppStream 17 MB/s | 27 MB 00:01
2026-03-09T13:41:33.246 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02
2026-03-09T13:41:35.286 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - CRB 10 MB/s | 8.0 MB 00:00
2026-03-09T13:41:36.786 INFO:teuthology.orchestra.run.vm04.stdout:CentOS Stream 9 - Extras packages 32 kB/s | 20 kB 00:00
2026-03-09T13:41:37.582 INFO:teuthology.orchestra.run.vm04.stdout:Extra Packages for Enterprise Linux 29 MB/s | 20 MB 00:00
2026-03-09T13:41:38.254 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - CRB 5.2 MB/s | 8.0 MB 00:01
2026-03-09T13:41:39.633 INFO:teuthology.orchestra.run.vm03.stdout:CentOS Stream 9 - Extras packages 72 kB/s | 20 kB 00:00
2026-03-09T13:41:40.196 INFO:teuthology.orchestra.run.vm03.stdout:Extra Packages for Enterprise Linux 45 MB/s | 20 MB 00:00
2026-03-09T13:41:42.286 INFO:teuthology.orchestra.run.vm04.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-09T13:41:43.737 INFO:teuthology.orchestra.run.vm04.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T13:41:43.737 INFO:teuthology.orchestra.run.vm04.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-09T13:41:43.742 INFO:teuthology.orchestra.run.vm04.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-09T13:41:43.742 INFO:teuthology.orchestra.run.vm04.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-09T13:41:43.771 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout:======================================================================================
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout:Installing:
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout:Upgrading:
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout:Installing dependencies:
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-09T13:41:43.776 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-09T13:41:43.777 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-iniconfig noarch 1.1.1-7.el9 appstream 17 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-09T13:41:43.778
INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout:Installing weak dependencies: 2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T13:41:43.778 INFO:teuthology.orchestra.run.vm04.stdout:====================================================================================== 2026-03-09T13:41:43.779 INFO:teuthology.orchestra.run.vm04.stdout:Install 138 Packages 2026-03-09T13:41:43.779 INFO:teuthology.orchestra.run.vm04.stdout:Upgrade 2 Packages 2026-03-09T13:41:43.779 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:41:43.779 INFO:teuthology.orchestra.run.vm04.stdout:Total download size: 211 M 2026-03-09T13:41:43.779 INFO:teuthology.orchestra.run.vm04.stdout:Downloading Packages: 2026-03-09T13:41:45.504 INFO:teuthology.orchestra.run.vm04.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-09T13:41:45.671 INFO:teuthology.orchestra.run.vm03.stdout:lab-extras 63 kB/s | 50 kB 00:00 2026-03-09T13:41:46.347 INFO:teuthology.orchestra.run.vm04.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-09T13:41:46.469 INFO:teuthology.orchestra.run.vm04.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-09T13:41:47.075 INFO:teuthology.orchestra.run.vm04.stdout:(4/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 4.0 MB/s | 2.4 MB 00:00 2026-03-09T13:41:47.090 INFO:teuthology.orchestra.run.vm04.stdout:(5/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.7 MB/s | 5.5 MB 00:02 2026-03-09T13:41:47.324 INFO:teuthology.orchestra.run.vm04.stdout:(6/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.3 MB/s | 1.1 MB 00:00 2026-03-09T13:41:47.408 
INFO:teuthology.orchestra.run.vm03.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T13:41:47.409 INFO:teuthology.orchestra.run.vm03.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-09T13:41:47.415 INFO:teuthology.orchestra.run.vm03.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-09T13:41:47.415 INFO:teuthology.orchestra.run.vm03.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-09T13:41:47.453 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T13:41:47.457 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-09T13:41:47.457 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout:Installing: 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-09T13:41:47.458 
INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytest noarch 6.2.2-7.el9 appstream 519 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 
2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout:Upgrading: 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies: 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: 
ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-09T13:41:47.458 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-09T13:41:47.459 
INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 
548 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-iniconfig noarch 
1.1.1-7.el9 appstream 17 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-09T13:41:47.459 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pluggy noarch 0.13.1-7.el9 appstream 41 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-py noarch 1.10.0-6.el9 appstream 477 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 
5.0.0-2.el9 epel 36 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:Installing weak dependencies: 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: 
2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:====================================================================================== 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:Install 138 Packages 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:Upgrade 2 Packages 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:41:47.460 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 211 M 2026-03-09T13:41:47.461 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages: 2026-03-09T13:41:48.127 INFO:teuthology.orchestra.run.vm04.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.6 MB/s | 4.7 MB 00:01 2026-03-09T13:41:49.082 INFO:teuthology.orchestra.run.vm04.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.4 MB/s | 22 MB 00:04 2026-03-09T13:41:49.193 INFO:teuthology.orchestra.run.vm03.stdout:(1/140): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-09T13:41:49.201 INFO:teuthology.orchestra.run.vm04.stdout:(9/140): ceph-selinux-19.2.3-678.ge911bdeb.el9. 211 kB/s | 25 kB 00:00 2026-03-09T13:41:49.522 INFO:teuthology.orchestra.run.vm04.stdout:(10/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 7.8 MB/s | 17 MB 00:02 2026-03-09T13:41:49.644 INFO:teuthology.orchestra.run.vm04.stdout:(11/140): libcephfs-devel-19.2.3-678.ge911bdeb. 
277 kB/s | 34 kB 00:00 2026-03-09T13:41:49.796 INFO:teuthology.orchestra.run.vm04.stdout:(12/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 6.4 MB/s | 1.0 MB 00:00 2026-03-09T13:41:49.927 INFO:teuthology.orchestra.run.vm04.stdout:(13/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.2 MB/s | 163 kB 00:00 2026-03-09T13:41:50.047 INFO:teuthology.orchestra.run.vm03.stdout:(2/140): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.3 MB/s | 1.2 MB 00:00 2026-03-09T13:41:50.049 INFO:teuthology.orchestra.run.vm04.stdout:(14/140): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-09T13:41:50.180 INFO:teuthology.orchestra.run.vm03.stdout:(3/140): ceph-immutable-object-cache-19.2.3-678 1.1 MB/s | 145 kB 00:00 2026-03-09T13:41:50.181 INFO:teuthology.orchestra.run.vm04.stdout:(15/140): libradosstriper1-19.2.3-678.ge911bdeb 3.7 MB/s | 503 kB 00:00 2026-03-09T13:41:50.232 INFO:teuthology.orchestra.run.vm04.stdout:(16/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9 5.1 MB/s | 11 MB 00:02 2026-03-09T13:41:50.348 INFO:teuthology.orchestra.run.vm04.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 388 kB/s | 45 kB 00:00 2026-03-09T13:41:50.465 INFO:teuthology.orchestra.run.vm04.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00 2026-03-09T13:41:50.581 INFO:teuthology.orchestra.run.vm04.stdout:(19/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-09T13:41:50.680 INFO:teuthology.orchestra.run.vm04.stdout:(20/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00 2026-03-09T13:41:50.700 INFO:teuthology.orchestra.run.vm04.stdout:(21/140): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-09T13:41:50.753 INFO:teuthology.orchestra.run.vm03.stdout:(4/140): ceph-base-19.2.3-678.ge911bdeb.el9.x86 2.7 MB/s | 5.5 MB 00:02 2026-03-09T13:41:50.805 INFO:teuthology.orchestra.run.vm04.stdout:(22/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 
2.4 MB/s | 303 kB 00:00 2026-03-09T13:41:50.815 INFO:teuthology.orchestra.run.vm03.stdout:(5/140): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 3.8 MB/s | 2.4 MB 00:00 2026-03-09T13:41:50.816 INFO:teuthology.orchestra.run.vm04.stdout:(23/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 862 kB/s | 100 kB 00:00 2026-03-09T13:41:50.928 INFO:teuthology.orchestra.run.vm04.stdout:(24/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 690 kB/s | 85 kB 00:00 2026-03-09T13:41:50.985 INFO:teuthology.orchestra.run.vm03.stdout:(6/140): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 4.6 MB/s | 1.1 MB 00:00 2026-03-09T13:41:51.051 INFO:teuthology.orchestra.run.vm04.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-09T13:41:51.172 INFO:teuthology.orchestra.run.vm04.stdout:(26/140): ceph-grafana-dashboards-19.2.3-678.ge 258 kB/s | 31 kB 00:00 2026-03-09T13:41:51.294 INFO:teuthology.orchestra.run.vm04.stdout:(27/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-09T13:41:51.399 INFO:teuthology.orchestra.run.vm04.stdout:(28/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 5.3 MB/s | 3.1 MB 00:00 2026-03-09T13:41:51.791 INFO:teuthology.orchestra.run.vm04.stdout:(29/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 7.7 MB/s | 3.8 MB 00:00 2026-03-09T13:41:51.806 INFO:teuthology.orchestra.run.vm03.stdout:(7/140): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 4.8 MB/s | 4.7 MB 00:00 2026-03-09T13:41:51.915 INFO:teuthology.orchestra.run.vm04.stdout:(30/140): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-09T13:41:52.036 INFO:teuthology.orchestra.run.vm04.stdout:(31/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 406 kB/s | 49 kB 00:00 2026-03-09T13:41:52.157 INFO:teuthology.orchestra.run.vm04.stdout:(32/140): ceph-prometheus-alerts-19.2.3-678.ge9 139 kB/s | 17 kB 00:00 2026-03-09T13:41:52.282 INFO:teuthology.orchestra.run.vm04.stdout:(33/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.3 MB/s | 299 kB 00:00 2026-03-09T13:41:52.412 INFO:teuthology.orchestra.run.vm04.stdout:(34/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.8 MB/s | 769 kB 00:00 2026-03-09T13:41:52.711 INFO:teuthology.orchestra.run.vm04.stdout:(35/140): ceph-mgr-diskprediction-local-19.2.3- 5.6 MB/s | 7.4 MB 00:01 2026-03-09T13:41:52.761 INFO:teuthology.orchestra.run.vm03.stdout:(8/140): ceph-common-19.2.3-678.ge911bdeb.el9.x 5.4 MB/s | 22 MB 00:04 2026-03-09T13:41:52.850 INFO:teuthology.orchestra.run.vm04.stdout:(36/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 802 kB/s | 351 kB 00:00 2026-03-09T13:41:52.908 INFO:teuthology.orchestra.run.vm03.stdout:(9/140): ceph-osd-19.2.3-678.ge911bdeb.el9.x86_ 8.9 MB/s | 17 MB 00:01 2026-03-09T13:41:52.909 INFO:teuthology.orchestra.run.vm03.stdout:(10/140): ceph-selinux-19.2.3-678.ge911bdeb.el9 170 kB/s | 25 kB 00:00 2026-03-09T13:41:52.928 INFO:teuthology.orchestra.run.vm04.stdout:(37/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 187 kB/s | 40 kB 00:00 2026-03-09T13:41:52.928 INFO:teuthology.orchestra.run.vm04.stdout:(38/140): libconfig-1.7.2-9.el9.x86_64.rpm 924 kB/s | 72 kB 00:00 2026-03-09T13:41:53.029 INFO:teuthology.orchestra.run.vm03.stdout:(11/140): libcephfs-devel-19.2.3-678.ge911bdeb. 
278 kB/s | 34 kB 00:00 2026-03-09T13:41:53.103 INFO:teuthology.orchestra.run.vm04.stdout:(39/140): libgfortran-11.5.0-14.el9.x86_64.rpm 4.5 MB/s | 794 kB 00:00 2026-03-09T13:41:53.121 INFO:teuthology.orchestra.run.vm04.stdout:(40/140): libquadmath-11.5.0-14.el9.x86_64.rpm 960 kB/s | 184 kB 00:00 2026-03-09T13:41:53.155 INFO:teuthology.orchestra.run.vm04.stdout:(41/140): mailcap-2.1.49-5.el9.noarch.rpm 634 kB/s | 33 kB 00:00 2026-03-09T13:41:53.166 INFO:teuthology.orchestra.run.vm03.stdout:(12/140): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.2 MB/s | 1.0 MB 00:00 2026-03-09T13:41:53.199 INFO:teuthology.orchestra.run.vm03.stdout:(13/140): ceph-radosgw-19.2.3-678.ge911bdeb.el9 7.7 MB/s | 11 MB 00:01 2026-03-09T13:41:53.218 INFO:teuthology.orchestra.run.vm04.stdout:(42/140): pciutils-3.7.0-7.el9.x86_64.rpm 956 kB/s | 93 kB 00:00 2026-03-09T13:41:53.250 INFO:teuthology.orchestra.run.vm04.stdout:(43/140): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.6 MB/s | 253 kB 00:00 2026-03-09T13:41:53.285 INFO:teuthology.orchestra.run.vm03.stdout:(14/140): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00 2026-03-09T13:41:53.316 INFO:teuthology.orchestra.run.vm04.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 1.6 MB/s | 106 kB 00:00 2026-03-09T13:41:53.322 INFO:teuthology.orchestra.run.vm03.stdout:(15/140): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-09T13:41:53.407 INFO:teuthology.orchestra.run.vm03.stdout:(16/140): libradosstriper1-19.2.3-678.ge911bdeb 4.0 MB/s | 503 kB 00:00 2026-03-09T13:41:53.410 INFO:teuthology.orchestra.run.vm04.stdout:(45/140): python3-pycparser-2.20-6.el9.noarch.r 1.4 MB/s | 135 kB 00:00 2026-03-09T13:41:53.436 INFO:teuthology.orchestra.run.vm04.stdout:(46/140): python3-cryptography-36.0.1-5.el9.x86 5.7 MB/s | 1.2 MB 00:00 2026-03-09T13:41:53.525 INFO:teuthology.orchestra.run.vm03.stdout:(17/140): python3-ceph-argparse-19.2.3-678.ge91 384 kB/s | 45 kB 00:00 2026-03-09T13:41:53.562 
INFO:teuthology.orchestra.run.vm04.stdout:(47/140): python3-requests-2.25.1-10.el9.noarch 832 kB/s | 126 kB 00:00 2026-03-09T13:41:53.586 INFO:teuthology.orchestra.run.vm04.stdout:(48/140): python3-urllib3-1.26.5-7.el9.noarch.r 1.4 MB/s | 218 kB 00:00 2026-03-09T13:41:53.643 INFO:teuthology.orchestra.run.vm03.stdout:(18/140): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00 2026-03-09T13:41:53.658 INFO:teuthology.orchestra.run.vm04.stdout:(49/140): zip-3.0-35.el9.x86_64.rpm 3.6 MB/s | 266 kB 00:00 2026-03-09T13:41:53.699 INFO:teuthology.orchestra.run.vm04.stdout:(50/140): unzip-6.0-59.el9.x86_64.rpm 1.3 MB/s | 182 kB 00:00 2026-03-09T13:41:53.701 INFO:teuthology.orchestra.run.vm04.stdout:(51/140): boost-program-options-1.75.0-13.el9.x 2.3 MB/s | 104 kB 00:00 2026-03-09T13:41:53.752 INFO:teuthology.orchestra.run.vm04.stdout:(52/140): flexiblas-3.0.4-9.el9.x86_64.rpm 559 kB/s | 30 kB 00:00 2026-03-09T13:41:53.762 INFO:teuthology.orchestra.run.vm03.stdout:(19/140): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-09T13:41:53.830 INFO:teuthology.orchestra.run.vm03.stdout:(20/140): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 11 MB/s | 5.4 MB 00:00 2026-03-09T13:41:53.884 INFO:teuthology.orchestra.run.vm03.stdout:(21/140): python3-rados-19.2.3-678.ge911bdeb.el 2.6 MB/s | 323 kB 00:00 2026-03-09T13:41:53.954 INFO:teuthology.orchestra.run.vm03.stdout:(22/140): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-09T13:41:53.992 INFO:teuthology.orchestra.run.vm04.stdout:(53/140): flexiblas-openblas-openmp-3.0.4-9.el9 62 kB/s | 15 kB 00:00 2026-03-09T13:41:54.002 INFO:teuthology.orchestra.run.vm03.stdout:(23/140): python3-rgw-19.2.3-678.ge911bdeb.el9. 
842 kB/s | 100 kB 00:00 2026-03-09T13:41:54.010 INFO:teuthology.orchestra.run.vm04.stdout:(54/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 9.7 MB/s | 3.0 MB 00:00 2026-03-09T13:41:54.076 INFO:teuthology.orchestra.run.vm03.stdout:(24/140): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 700 kB/s | 85 kB 00:00 2026-03-09T13:41:54.086 INFO:teuthology.orchestra.run.vm04.stdout:(55/140): libnbd-1.20.3-4.el9.x86_64.rpm 1.7 MB/s | 164 kB 00:00 2026-03-09T13:41:54.145 INFO:teuthology.orchestra.run.vm04.stdout:(56/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.2 MB/s | 160 kB 00:00 2026-03-09T13:41:54.169 INFO:teuthology.orchestra.run.vm04.stdout:(57/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 548 kB/s | 45 kB 00:00 2026-03-09T13:41:54.199 INFO:teuthology.orchestra.run.vm03.stdout:(25/140): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-09T13:41:54.290 INFO:teuthology.orchestra.run.vm04.stdout:(58/140): librdkafka-1.6.1-102.el9.x86_64.rpm 4.5 MB/s | 662 kB 00:00 2026-03-09T13:41:54.321 INFO:teuthology.orchestra.run.vm03.stdout:(26/140): ceph-grafana-dashboards-19.2.3-678.ge 255 kB/s | 31 kB 00:00 2026-03-09T13:41:54.360 INFO:teuthology.orchestra.run.vm03.stdout:(27/140): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.7 MB/s | 3.1 MB 00:00 2026-03-09T13:41:54.376 INFO:teuthology.orchestra.run.vm04.stdout:(59/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.2 MB/s | 246 kB 00:00 2026-03-09T13:41:54.444 INFO:teuthology.orchestra.run.vm03.stdout:(28/140): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00 2026-03-09T13:41:54.547 INFO:teuthology.orchestra.run.vm04.stdout:(60/140): ceph-test-19.2.3-678.ge911bdeb.el9.x8 9.3 MB/s | 50 MB 00:05 2026-03-09T13:41:54.552 INFO:teuthology.orchestra.run.vm04.stdout:(61/140): libxslt-1.1.34-12.el9.x86_64.rpm 889 kB/s | 233 kB 00:00 2026-03-09T13:41:54.573 INFO:teuthology.orchestra.run.vm04.stdout:(62/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 1.5 MB/s | 292 kB 00:00 2026-03-09T13:41:54.608 
INFO:teuthology.orchestra.run.vm04.stdout:(63/140): openblas-0.3.29-1.el9.x86_64.rpm 758 kB/s | 42 kB 00:00 2026-03-09T13:41:54.626 INFO:teuthology.orchestra.run.vm04.stdout:(64/140): lua-5.4.4-4.el9.x86_64.rpm 2.4 MB/s | 188 kB 00:00 2026-03-09T13:41:54.727 INFO:teuthology.orchestra.run.vm03.stdout:(29/140): ceph-mgr-dashboard-19.2.3-678.ge911bd 10 MB/s | 3.8 MB 00:00 2026-03-09T13:41:54.849 INFO:teuthology.orchestra.run.vm03.stdout:(30/140): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-09T13:41:54.961 INFO:teuthology.orchestra.run.vm04.stdout:(65/140): python3-babel-2.9.1-2.el9.noarch.rpm 18 MB/s | 6.0 MB 00:00 2026-03-09T13:41:54.967 INFO:teuthology.orchestra.run.vm03.stdout:(31/140): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 416 kB/s | 49 kB 00:00 2026-03-09T13:41:55.035 INFO:teuthology.orchestra.run.vm04.stdout:(66/140): protobuf-3.14.0-17.el9.x86_64.rpm 2.4 MB/s | 1.0 MB 00:00 2026-03-09T13:41:55.062 INFO:teuthology.orchestra.run.vm04.stdout:(67/140): python3-devel-3.9.25-3.el9.x86_64.rpm 2.4 MB/s | 244 kB 00:00 2026-03-09T13:41:55.063 INFO:teuthology.orchestra.run.vm04.stdout:(68/140): python3-iniconfig-1.1.1-7.el9.noarch. 
625 kB/s | 17 kB 00:00 2026-03-09T13:41:55.087 INFO:teuthology.orchestra.run.vm03.stdout:(32/140): ceph-prometheus-alerts-19.2.3-678.ge9 141 kB/s | 17 kB 00:00 2026-03-09T13:41:55.087 INFO:teuthology.orchestra.run.vm04.stdout:(69/140): openblas-openmp-0.3.29-1.el9.x86_64.r 10 MB/s | 5.3 MB 00:00 2026-03-09T13:41:55.091 INFO:teuthology.orchestra.run.vm04.stdout:(70/140): python3-jmespath-1.0.1-1.el9.noarch.r 1.7 MB/s | 48 kB 00:00 2026-03-09T13:41:55.147 INFO:teuthology.orchestra.run.vm04.stdout:(71/140): python3-jinja2-2.11.3-8.el9.noarch.rp 2.9 MB/s | 249 kB 00:00 2026-03-09T13:41:55.182 INFO:teuthology.orchestra.run.vm04.stdout:(72/140): python3-markupsafe-1.1.1-12.el9.x86_6 994 kB/s | 35 kB 00:00 2026-03-09T13:41:55.201 INFO:teuthology.orchestra.run.vm03.stdout:(33/140): ceph-mgr-diskprediction-local-19.2.3- 9.8 MB/s | 7.4 MB 00:00 2026-03-09T13:41:55.201 INFO:teuthology.orchestra.run.vm04.stdout:(73/140): python3-libstoragemgmt-1.10.1-1.el9.x 1.5 MB/s | 177 kB 00:00 2026-03-09T13:41:55.208 INFO:teuthology.orchestra.run.vm03.stdout:(34/140): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.4 MB/s | 299 kB 00:00 2026-03-09T13:41:55.275 INFO:teuthology.orchestra.run.vm04.stdout:(74/140): python3-mako-1.1.4-6.el9.noarch.rpm 937 kB/s | 172 kB 00:00 2026-03-09T13:41:55.325 INFO:teuthology.orchestra.run.vm04.stdout:(75/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 43 MB/s | 6.1 MB 00:00 2026-03-09T13:41:55.331 INFO:teuthology.orchestra.run.vm04.stdout:(76/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.3 MB/s | 442 kB 00:00 2026-03-09T13:41:55.337 INFO:teuthology.orchestra.run.vm03.stdout:(35/140): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.5 MB/s | 769 kB 00:00 2026-03-09T13:41:55.341 INFO:teuthology.orchestra.run.vm04.stdout:(77/140): python3-packaging-20.9-5.el9.noarch.r 1.1 MB/s | 77 kB 00:00 2026-03-09T13:41:55.342 INFO:teuthology.orchestra.run.vm04.stdout:(78/140): python3-pluggy-0.13.1-7.el9.noarch.rp 2.4 MB/s | 41 kB 00:00 2026-03-09T13:41:55.376 INFO:teuthology.orchestra.run.vm04.stdout:(79/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 4.6 MB/s | 157 kB 00:00 2026-03-09T13:41:55.381 INFO:teuthology.orchestra.run.vm04.stdout:(80/140): python3-protobuf-3.14.0-17.el9.noarch 5.3 MB/s | 267 kB 00:00 2026-03-09T13:41:55.422 INFO:teuthology.orchestra.run.vm04.stdout:(81/140): python3-py-1.10.0-6.el9.noarch.rpm 5.9 MB/s | 477 kB 00:00 2026-03-09T13:41:55.424 INFO:teuthology.orchestra.run.vm04.stdout:(82/140): python3-pyasn1-modules-0.4.8-7.el9.no 5.6 MB/s | 277 kB 00:00 2026-03-09T13:41:55.466 INFO:teuthology.orchestra.run.vm04.stdout:(83/140): python3-pytest-6.2.2-7.el9.noarch.rpm 6.0 MB/s | 519 kB 00:00 2026-03-09T13:41:55.489 INFO:teuthology.orchestra.run.vm04.stdout:(84/140): python3-requests-oauthlib-1.3.0-12.el 795 kB/s | 54 kB 00:00 2026-03-09T13:41:55.494 INFO:teuthology.orchestra.run.vm03.stdout:(36/140): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.2 MB/s | 351 kB 00:00 2026-03-09T13:41:55.513 INFO:teuthology.orchestra.run.vm03.stdout:(37/140): ledmon-libs-1.1.0-3.el9.x86_64.rpm 231 kB/s | 40 kB 00:00 2026-03-09T13:41:55.514 
INFO:teuthology.orchestra.run.vm04.stdout:(85/140): python3-toml-0.10.2-6.el9.noarch.rpm 869 kB/s | 42 kB 00:00 2026-03-09T13:41:55.549 INFO:teuthology.orchestra.run.vm04.stdout:(86/140): qatlib-25.08.0-2.el9.x86_64.rpm 3.9 MB/s | 240 kB 00:00 2026-03-09T13:41:55.578 INFO:teuthology.orchestra.run.vm04.stdout:(87/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.3 MB/s | 66 kB 00:00 2026-03-09T13:41:55.579 INFO:teuthology.orchestra.run.vm03.stdout:(38/140): libconfig-1.7.2-9.el9.x86_64.rpm 843 kB/s | 72 kB 00:00 2026-03-09T13:41:55.599 INFO:teuthology.orchestra.run.vm04.stdout:(88/140): qatlib-service-25.08.0-2.el9.x86_64.r 435 kB/s | 37 kB 00:00 2026-03-09T13:41:55.680 INFO:teuthology.orchestra.run.vm04.stdout:(89/140): socat-1.7.4.1-8.el9.x86_64.rpm 2.9 MB/s | 303 kB 00:00 2026-03-09T13:41:55.697 INFO:teuthology.orchestra.run.vm04.stdout:(90/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 650 kB/s | 64 kB 00:00 2026-03-09T13:41:55.760 INFO:teuthology.orchestra.run.vm04.stdout:(91/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 57 MB/s | 19 MB 00:00 2026-03-09T13:41:55.781 INFO:teuthology.orchestra.run.vm04.stdout:(92/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 25 MB/s | 551 kB 00:00 2026-03-09T13:41:55.791 INFO:teuthology.orchestra.run.vm04.stdout:(93/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 32 MB/s | 308 kB 00:00 2026-03-09T13:41:55.793 INFO:teuthology.orchestra.run.vm04.stdout:(94/140): grpc-data-1.46.7-10.el9.noarch.rpm 8.3 MB/s | 19 kB 00:00 2026-03-09T13:41:55.855 INFO:teuthology.orchestra.run.vm04.stdout:(95/140): libarrow-9.0.0-15.el9.x86_64.rpm 72 MB/s | 4.4 MB 00:00 2026-03-09T13:41:55.858 INFO:teuthology.orchestra.run.vm04.stdout:(96/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 10 MB/s | 25 kB 00:00 2026-03-09T13:41:55.861 INFO:teuthology.orchestra.run.vm04.stdout:(97/140): liboath-2.6.12-1.el9.x86_64.rpm 17 MB/s | 49 kB 00:00 2026-03-09T13:41:55.864 INFO:teuthology.orchestra.run.vm04.stdout:(98/140): libunwind-1.6.2-1.el9.x86_64.rpm 24 MB/s | 67 kB 00:00 
2026-03-09T13:41:55.867 INFO:teuthology.orchestra.run.vm04.stdout:(99/140): luarocks-3.9.2-5.el9.noarch.rpm 40 MB/s | 151 kB 00:00 2026-03-09T13:41:55.871 INFO:teuthology.orchestra.run.vm03.stdout:(39/140): libgfortran-11.5.0-14.el9.x86_64.rpm 2.2 MB/s | 794 kB 00:00 2026-03-09T13:41:55.879 INFO:teuthology.orchestra.run.vm04.stdout:(100/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 74 MB/s | 838 kB 00:00 2026-03-09T13:41:55.887 INFO:teuthology.orchestra.run.vm04.stdout:(101/140): python3-asyncssh-2.13.2-5.el9.noarch 63 MB/s | 548 kB 00:00 2026-03-09T13:41:55.891 INFO:teuthology.orchestra.run.vm04.stdout:(102/140): python3-autocommand-2.2.2-8.el9.noar 10 MB/s | 29 kB 00:00 2026-03-09T13:41:55.894 INFO:teuthology.orchestra.run.vm04.stdout:(103/140): python3-backports-tarfile-1.2.0-1.el 21 MB/s | 60 kB 00:00 2026-03-09T13:41:55.897 INFO:teuthology.orchestra.run.vm04.stdout:(104/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 14 MB/s | 43 kB 00:00 2026-03-09T13:41:55.900 INFO:teuthology.orchestra.run.vm04.stdout:(105/140): python3-cachetools-4.2.4-1.el9.noarc 12 MB/s | 32 kB 00:00 2026-03-09T13:41:55.903 INFO:teuthology.orchestra.run.vm04.stdout:(106/140): python3-certifi-2023.05.07-4.el9.noa 4.9 MB/s | 14 kB 00:00 2026-03-09T13:41:55.907 INFO:teuthology.orchestra.run.vm04.stdout:(107/140): python3-cheroot-10.0.1-4.el9.noarch. 
42 MB/s | 173 kB 00:00 2026-03-09T13:41:55.913 INFO:teuthology.orchestra.run.vm04.stdout:(108/140): python3-cherrypy-18.6.1-2.el9.noarch 58 MB/s | 358 kB 00:00 2026-03-09T13:41:55.918 INFO:teuthology.orchestra.run.vm04.stdout:(109/140): python3-google-auth-2.45.0-1.el9.noa 54 MB/s | 254 kB 00:00 2026-03-09T13:41:55.926 INFO:teuthology.orchestra.run.vm03.stdout:(40/140): mailcap-2.1.49-5.el9.noarch.rpm 599 kB/s | 33 kB 00:00 2026-03-09T13:41:55.938 INFO:teuthology.orchestra.run.vm03.stdout:(41/140): libquadmath-11.5.0-14.el9.x86_64.rpm 515 kB/s | 184 kB 00:00 2026-03-09T13:41:55.945 INFO:teuthology.orchestra.run.vm04.stdout:(110/140): python3-grpcio-1.46.7-10.el9.x86_64. 79 MB/s | 2.0 MB 00:00 2026-03-09T13:41:55.948 INFO:teuthology.orchestra.run.vm04.stdout:(111/140): python3-grpcio-tools-1.46.7-10.el9.x 40 MB/s | 144 kB 00:00 2026-03-09T13:41:55.951 INFO:teuthology.orchestra.run.vm04.stdout:(112/140): python3-jaraco-8.2.1-3.el9.noarch.rp 4.6 MB/s | 11 kB 00:00 2026-03-09T13:41:55.953 INFO:teuthology.orchestra.run.vm04.stdout:(113/140): python3-jaraco-classes-3.2.1-5.el9.n 7.6 MB/s | 18 kB 00:00 2026-03-09T13:41:55.955 INFO:teuthology.orchestra.run.vm04.stdout:(114/140): python3-jaraco-collections-3.0.0-8.e 9.9 MB/s | 23 kB 00:00 2026-03-09T13:41:55.958 INFO:teuthology.orchestra.run.vm04.stdout:(115/140): python3-jaraco-context-6.0.1-3.el9.n 8.3 MB/s | 20 kB 00:00 2026-03-09T13:41:55.961 INFO:teuthology.orchestra.run.vm04.stdout:(116/140): python3-jaraco-functools-3.5.0-2.el9 7.7 MB/s | 19 kB 00:00 2026-03-09T13:41:55.966 INFO:teuthology.orchestra.run.vm04.stdout:(117/140): python3-jaraco-text-4.0.0-2.el9.noar 5.8 MB/s | 26 kB 00:00 2026-03-09T13:41:55.982 INFO:teuthology.orchestra.run.vm04.stdout:(118/140): python3-kubernetes-26.1.0-3.el9.noar 62 MB/s | 1.0 MB 00:00 2026-03-09T13:41:55.985 INFO:teuthology.orchestra.run.vm04.stdout:(119/140): python3-logutils-0.3.5-21.el9.noarch 18 MB/s | 46 kB 00:00 2026-03-09T13:41:55.988 
INFO:teuthology.orchestra.run.vm04.stdout:(120/140): python3-more-itertools-8.12.0-2.el9. 28 MB/s | 79 kB 00:00 2026-03-09T13:41:55.991 INFO:teuthology.orchestra.run.vm04.stdout:(121/140): python3-natsort-7.1.1-5.el9.noarch.r 21 MB/s | 58 kB 00:00 2026-03-09T13:41:55.996 INFO:teuthology.orchestra.run.vm04.stdout:(122/140): python3-pecan-1.4.2-3.el9.noarch.rpm 53 MB/s | 272 kB 00:00 2026-03-09T13:41:56.000 INFO:teuthology.orchestra.run.vm04.stdout:(123/140): python3-portend-3.1.0-2.el9.noarch.r 3.8 MB/s | 16 kB 00:00 2026-03-09T13:41:56.004 INFO:teuthology.orchestra.run.vm04.stdout:(124/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 21 MB/s | 90 kB 00:00 2026-03-09T13:41:56.007 INFO:teuthology.orchestra.run.vm04.stdout:(125/140): python3-repoze-lru-0.7-16.el9.noarch 13 MB/s | 31 kB 00:00 2026-03-09T13:41:56.012 INFO:teuthology.orchestra.run.vm03.stdout:(42/140): pciutils-3.7.0-7.el9.x86_64.rpm 1.1 MB/s | 93 kB 00:00 2026-03-09T13:41:56.012 INFO:teuthology.orchestra.run.vm04.stdout:(126/140): python3-routes-2.5.1-5.el9.noarch.rp 38 MB/s | 188 kB 00:00 2026-03-09T13:41:56.015 INFO:teuthology.orchestra.run.vm04.stdout:(127/140): python3-rsa-4.9-2.el9.noarch.rpm 17 MB/s | 59 kB 00:00 2026-03-09T13:41:56.018 INFO:teuthology.orchestra.run.vm04.stdout:(128/140): python3-tempora-5.0.0-2.el9.noarch.r 12 MB/s | 36 kB 00:00 2026-03-09T13:41:56.021 INFO:teuthology.orchestra.run.vm04.stdout:(129/140): python3-typing-extensions-4.15.0-1.e 29 MB/s | 86 kB 00:00 2026-03-09T13:41:56.026 INFO:teuthology.orchestra.run.vm04.stdout:(130/140): python3-webob-1.8.8-2.el9.noarch.rpm 47 MB/s | 230 kB 00:00 2026-03-09T13:41:56.029 INFO:teuthology.orchestra.run.vm03.stdout:(43/140): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.7 MB/s | 253 kB 00:00 2026-03-09T13:41:56.030 INFO:teuthology.orchestra.run.vm04.stdout:(131/140): python3-websocket-client-1.2.3-2.el9 29 MB/s | 90 kB 00:00 2026-03-09T13:41:56.036 INFO:teuthology.orchestra.run.vm04.stdout:(132/140): python3-werkzeug-2.0.3-3.el9.1.noarc 63 MB/s | 
427 kB 00:00 2026-03-09T13:41:56.039 INFO:teuthology.orchestra.run.vm04.stdout:(133/140): python3-xmltodict-0.12.0-15.el9.noar 9.5 MB/s | 22 kB 00:00 2026-03-09T13:41:56.041 INFO:teuthology.orchestra.run.vm04.stdout:(134/140): python3-zc-lockfile-2.0-10.el9.noarc 8.8 MB/s | 20 kB 00:00 2026-03-09T13:41:56.046 INFO:teuthology.orchestra.run.vm04.stdout:(135/140): re2-20211101-20.el9.x86_64.rpm 42 MB/s | 191 kB 00:00 2026-03-09T13:41:56.068 INFO:teuthology.orchestra.run.vm04.stdout:(136/140): thrift-0.15.0-4.el9.x86_64.rpm 74 MB/s | 1.6 MB 00:00 2026-03-09T13:41:56.110 INFO:teuthology.orchestra.run.vm03.stdout:(44/140): python3-ply-3.11-14.el9.noarch.rpm 1.3 MB/s | 106 kB 00:00 2026-03-09T13:41:56.188 INFO:teuthology.orchestra.run.vm04.stdout:(137/140): lua-devel-5.4.4-4.el9.x86_64.rpm 44 kB/s | 22 kB 00:00 2026-03-09T13:41:56.190 INFO:teuthology.orchestra.run.vm03.stdout:(45/140): python3-pycparser-2.20-6.el9.noarch.r 1.7 MB/s | 135 kB 00:00 2026-03-09T13:41:56.251 INFO:teuthology.orchestra.run.vm03.stdout:(46/140): python3-requests-2.25.1-10.el9.noarch 2.0 MB/s | 126 kB 00:00 2026-03-09T13:41:56.317 INFO:teuthology.orchestra.run.vm03.stdout:(47/140): python3-urllib3-1.26.5-7.el9.noarch.r 3.2 MB/s | 218 kB 00:00 2026-03-09T13:41:56.368 INFO:teuthology.orchestra.run.vm03.stdout:(48/140): python3-cryptography-36.0.1-5.el9.x86 3.5 MB/s | 1.2 MB 00:00 2026-03-09T13:41:56.399 INFO:teuthology.orchestra.run.vm03.stdout:(49/140): unzip-6.0-59.el9.x86_64.rpm 2.2 MB/s | 182 kB 00:00 2026-03-09T13:41:56.485 INFO:teuthology.orchestra.run.vm03.stdout:(50/140): zip-3.0-35.el9.x86_64.rpm 2.2 MB/s | 266 kB 00:00 2026-03-09T13:41:56.803 INFO:teuthology.orchestra.run.vm03.stdout:(51/140): boost-program-options-1.75.0-13.el9.x 258 kB/s | 104 kB 00:00 2026-03-09T13:41:56.866 INFO:teuthology.orchestra.run.vm03.stdout:(52/140): flexiblas-3.0.4-9.el9.x86_64.rpm 78 kB/s | 30 kB 00:00 2026-03-09T13:41:56.964 INFO:teuthology.orchestra.run.vm03.stdout:(53/140): 
flexiblas-openblas-openmp-3.0.4-9.el9 153 kB/s | 15 kB 00:00 2026-03-09T13:41:57.155 INFO:teuthology.orchestra.run.vm04.stdout:(138/140): librados2-19.2.3-678.ge911bdeb.el9.x 3.2 MB/s | 3.4 MB 00:01 2026-03-09T13:41:57.214 INFO:teuthology.orchestra.run.vm03.stdout:(54/140): ceph-test-19.2.3-678.ge911bdeb.el9.x8 12 MB/s | 50 MB 00:04 2026-03-09T13:41:57.251 INFO:teuthology.orchestra.run.vm03.stdout:(55/140): libnbd-1.20.3-4.el9.x86_64.rpm 570 kB/s | 164 kB 00:00 2026-03-09T13:41:57.334 INFO:teuthology.orchestra.run.vm04.stdout:(139/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 2.8 MB/s | 3.2 MB 00:01 2026-03-09T13:41:57.403 INFO:teuthology.orchestra.run.vm03.stdout:(56/140): librabbitmq-0.11.0-7.el9.x86_64.rpm 299 kB/s | 45 kB 00:00 2026-03-09T13:41:57.685 INFO:teuthology.orchestra.run.vm03.stdout:(57/140): librdkafka-1.6.1-102.el9.x86_64.rpm 2.3 MB/s | 662 kB 00:00 2026-03-09T13:41:57.702 INFO:teuthology.orchestra.run.vm03.stdout:(58/140): libpmemobj-1.12.1-1.el9.x86_64.rpm 329 kB/s | 160 kB 00:00 2026-03-09T13:41:57.799 INFO:teuthology.orchestra.run.vm03.stdout:(59/140): libstoragemgmt-1.10.1-1.el9.x86_64.rp 2.1 MB/s | 246 kB 00:00 2026-03-09T13:41:57.856 INFO:teuthology.orchestra.run.vm03.stdout:(60/140): libxslt-1.1.34-12.el9.x86_64.rpm 1.5 MB/s | 233 kB 00:00 2026-03-09T13:41:57.961 INFO:teuthology.orchestra.run.vm03.stdout:(61/140): lttng-ust-2.12.0-6.el9.x86_64.rpm 1.8 MB/s | 292 kB 00:00 2026-03-09T13:41:57.962 INFO:teuthology.orchestra.run.vm03.stdout:(62/140): lua-5.4.4-4.el9.x86_64.rpm 1.7 MB/s | 188 kB 00:00 2026-03-09T13:41:58.075 INFO:teuthology.orchestra.run.vm03.stdout:(63/140): openblas-0.3.29-1.el9.x86_64.rpm 369 kB/s | 42 kB 00:00 2026-03-09T13:41:58.123 INFO:teuthology.orchestra.run.vm03.stdout:(64/140): flexiblas-netlib-3.0.4-9.el9.x86_64.r 2.3 MB/s | 3.0 MB 00:01 2026-03-09T13:41:58.286 INFO:teuthology.orchestra.run.vm03.stdout:(65/140): protobuf-3.14.0-17.el9.x86_64.rpm 4.8 MB/s | 1.0 MB 00:00 2026-03-09T13:41:58.375 
INFO:teuthology.orchestra.run.vm03.stdout:(66/140): python3-devel-3.9.25-3.el9.x86_64.rpm 2.7 MB/s | 244 kB 00:00 2026-03-09T13:41:58.501 INFO:teuthology.orchestra.run.vm03.stdout:(67/140): python3-iniconfig-1.1.1-7.el9.noarch. 139 kB/s | 17 kB 00:00 2026-03-09T13:41:58.516 INFO:teuthology.orchestra.run.vm03.stdout:(68/140): python3-babel-2.9.1-2.el9.noarch.rpm 15 MB/s | 6.0 MB 00:00 2026-03-09T13:41:58.614 INFO:teuthology.orchestra.run.vm03.stdout:(69/140): python3-jinja2-2.11.3-8.el9.noarch.rp 2.1 MB/s | 249 kB 00:00 2026-03-09T13:41:58.632 INFO:teuthology.orchestra.run.vm03.stdout:(70/140): python3-jmespath-1.0.1-1.el9.noarch.r 411 kB/s | 48 kB 00:00 2026-03-09T13:41:59.091 INFO:teuthology.orchestra.run.vm03.stdout:(71/140): python3-libstoragemgmt-1.10.1-1.el9.x 371 kB/s | 177 kB 00:00 2026-03-09T13:41:59.124 INFO:teuthology.orchestra.run.vm03.stdout:(72/140): python3-mako-1.1.4-6.el9.noarch.rpm 350 kB/s | 172 kB 00:00 2026-03-09T13:41:59.208 INFO:teuthology.orchestra.run.vm03.stdout:(73/140): python3-markupsafe-1.1.1-12.el9.x86_6 297 kB/s | 35 kB 00:00 2026-03-09T13:41:59.360 INFO:teuthology.orchestra.run.vm03.stdout:(74/140): python3-numpy-f2py-1.23.5-2.el9.x86_6 2.8 MB/s | 442 kB 00:00 2026-03-09T13:41:59.507 INFO:teuthology.orchestra.run.vm03.stdout:(75/140): openblas-openmp-0.3.29-1.el9.x86_64.r 3.4 MB/s | 5.3 MB 00:01 2026-03-09T13:41:59.508 INFO:teuthology.orchestra.run.vm03.stdout:(76/140): python3-packaging-20.9-5.el9.noarch.r 523 kB/s | 77 kB 00:00 2026-03-09T13:41:59.637 INFO:teuthology.orchestra.run.vm03.stdout:(77/140): python3-protobuf-3.14.0-17.el9.noarch 2.0 MB/s | 267 kB 00:00 2026-03-09T13:41:59.641 INFO:teuthology.orchestra.run.vm03.stdout:(78/140): python3-pluggy-0.13.1-7.el9.noarch.rp 309 kB/s | 41 kB 00:00 2026-03-09T13:41:59.780 INFO:teuthology.orchestra.run.vm03.stdout:(79/140): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.1 MB/s | 157 kB 00:00 2026-03-09T13:41:59.781 INFO:teuthology.orchestra.run.vm03.stdout:(80/140): 
python3-py-1.10.0-6.el9.noarch.rpm 3.2 MB/s | 477 kB 00:00 2026-03-09T13:41:59.873 INFO:teuthology.orchestra.run.vm03.stdout:(81/140): python3-numpy-1.23.5-2.el9.x86_64.rpm 8.2 MB/s | 6.1 MB 00:00 2026-03-09T13:41:59.907 INFO:teuthology.orchestra.run.vm03.stdout:(82/140): python3-pyasn1-modules-0.4.8-7.el9.no 2.1 MB/s | 277 kB 00:00 2026-03-09T13:41:59.930 INFO:teuthology.orchestra.run.vm03.stdout:(83/140): python3-pytest-6.2.2-7.el9.noarch.rpm 3.4 MB/s | 519 kB 00:00 2026-03-09T13:42:00.007 INFO:teuthology.orchestra.run.vm03.stdout:(84/140): python3-requests-oauthlib-1.3.0-12.el 400 kB/s | 54 kB 00:00 2026-03-09T13:42:00.100 INFO:teuthology.orchestra.run.vm03.stdout:(85/140): python3-toml-0.10.2-6.el9.noarch.rpm 245 kB/s | 42 kB 00:00 2026-03-09T13:42:00.137 INFO:teuthology.orchestra.run.vm03.stdout:(86/140): qatlib-25.08.0-2.el9.x86_64.rpm 1.8 MB/s | 240 kB 00:00 2026-03-09T13:42:00.203 INFO:teuthology.orchestra.run.vm03.stdout:(87/140): qatlib-service-25.08.0-2.el9.x86_64.r 359 kB/s | 37 kB 00:00 2026-03-09T13:42:00.261 INFO:teuthology.orchestra.run.vm03.stdout:(88/140): qatzip-libs-1.3.1-1.el9.x86_64.rpm 537 kB/s | 66 kB 00:00 2026-03-09T13:42:00.368 INFO:teuthology.orchestra.run.vm03.stdout:(89/140): xmlstarlet-1.6.1-20.el9.x86_64.rpm 596 kB/s | 64 kB 00:00 2026-03-09T13:42:00.423 INFO:teuthology.orchestra.run.vm03.stdout:(90/140): socat-1.7.4.1-8.el9.x86_64.rpm 1.4 MB/s | 303 kB 00:00 2026-03-09T13:42:00.764 INFO:teuthology.orchestra.run.vm03.stdout:(91/140): python3-scipy-1.9.3-2.el9.x86_64.rpm 22 MB/s | 19 MB 00:00 2026-03-09T13:42:00.778 INFO:teuthology.orchestra.run.vm03.stdout:(92/140): abseil-cpp-20211102.0-4.el9.x86_64.rp 39 MB/s | 551 kB 00:00 2026-03-09T13:42:00.784 INFO:teuthology.orchestra.run.vm03.stdout:(93/140): gperftools-libs-2.9.1-3.el9.x86_64.rp 53 MB/s | 308 kB 00:00 2026-03-09T13:42:00.786 INFO:teuthology.orchestra.run.vm03.stdout:(94/140): grpc-data-1.46.7-10.el9.noarch.rpm 9.7 MB/s | 19 kB 00:00 2026-03-09T13:42:00.840 
INFO:teuthology.orchestra.run.vm03.stdout:(95/140): libarrow-9.0.0-15.el9.x86_64.rpm 83 MB/s | 4.4 MB 00:00 2026-03-09T13:42:00.843 INFO:teuthology.orchestra.run.vm03.stdout:(96/140): libarrow-doc-9.0.0-15.el9.noarch.rpm 11 MB/s | 25 kB 00:00 2026-03-09T13:42:00.847 INFO:teuthology.orchestra.run.vm03.stdout:(97/140): liboath-2.6.12-1.el9.x86_64.rpm 14 MB/s | 49 kB 00:00 2026-03-09T13:42:00.850 INFO:teuthology.orchestra.run.vm03.stdout:(98/140): libunwind-1.6.2-1.el9.x86_64.rpm 26 MB/s | 67 kB 00:00 2026-03-09T13:42:00.853 INFO:teuthology.orchestra.run.vm03.stdout:(99/140): luarocks-3.9.2-5.el9.noarch.rpm 42 MB/s | 151 kB 00:00 2026-03-09T13:42:00.865 INFO:teuthology.orchestra.run.vm03.stdout:(100/140): parquet-libs-9.0.0-15.el9.x86_64.rpm 73 MB/s | 838 kB 00:00 2026-03-09T13:42:00.873 INFO:teuthology.orchestra.run.vm03.stdout:(101/140): python3-asyncssh-2.13.2-5.el9.noarch 69 MB/s | 548 kB 00:00 2026-03-09T13:42:00.875 INFO:teuthology.orchestra.run.vm03.stdout:(102/140): python3-autocommand-2.2.2-8.el9.noar 14 MB/s | 29 kB 00:00 2026-03-09T13:42:00.878 INFO:teuthology.orchestra.run.vm03.stdout:(103/140): python3-backports-tarfile-1.2.0-1.el 24 MB/s | 60 kB 00:00 2026-03-09T13:42:00.880 INFO:teuthology.orchestra.run.vm03.stdout:(104/140): python3-bcrypt-3.2.2-1.el9.x86_64.rp 18 MB/s | 43 kB 00:00 2026-03-09T13:42:00.882 INFO:teuthology.orchestra.run.vm03.stdout:(105/140): python3-cachetools-4.2.4-1.el9.noarc 15 MB/s | 32 kB 00:00 2026-03-09T13:42:00.885 INFO:teuthology.orchestra.run.vm03.stdout:(106/140): python3-certifi-2023.05.07-4.el9.noa 7.0 MB/s | 14 kB 00:00 2026-03-09T13:42:00.888 INFO:teuthology.orchestra.run.vm03.stdout:(107/140): python3-cheroot-10.0.1-4.el9.noarch. 
47 MB/s | 173 kB 00:00 2026-03-09T13:42:00.894 INFO:teuthology.orchestra.run.vm03.stdout:(108/140): python3-cherrypy-18.6.1-2.el9.noarch 61 MB/s | 358 kB 00:00 2026-03-09T13:42:00.899 INFO:teuthology.orchestra.run.vm03.stdout:(109/140): python3-google-auth-2.45.0-1.el9.noa 54 MB/s | 254 kB 00:00 2026-03-09T13:42:00.925 INFO:teuthology.orchestra.run.vm03.stdout:(110/140): python3-grpcio-1.46.7-10.el9.x86_64. 78 MB/s | 2.0 MB 00:00 2026-03-09T13:42:00.926 INFO:teuthology.orchestra.run.vm03.stdout:(111/140): lua-devel-5.4.4-4.el9.x86_64.rpm 40 kB/s | 22 kB 00:00 2026-03-09T13:42:00.929 INFO:teuthology.orchestra.run.vm03.stdout:(112/140): python3-grpcio-tools-1.46.7-10.el9.x 41 MB/s | 144 kB 00:00 2026-03-09T13:42:00.930 INFO:teuthology.orchestra.run.vm03.stdout:(113/140): python3-jaraco-8.2.1-3.el9.noarch.rp 2.8 MB/s | 11 kB 00:00 2026-03-09T13:42:00.931 INFO:teuthology.orchestra.run.vm03.stdout:(114/140): python3-jaraco-classes-3.2.1-5.el9.n 8.9 MB/s | 18 kB 00:00 2026-03-09T13:42:00.932 INFO:teuthology.orchestra.run.vm03.stdout:(115/140): python3-jaraco-collections-3.0.0-8.e 11 MB/s | 23 kB 00:00 2026-03-09T13:42:00.933 INFO:teuthology.orchestra.run.vm03.stdout:(116/140): python3-jaraco-context-6.0.1-3.el9.n 11 MB/s | 20 kB 00:00 2026-03-09T13:42:00.934 INFO:teuthology.orchestra.run.vm03.stdout:(117/140): python3-jaraco-functools-3.5.0-2.el9 10 MB/s | 19 kB 00:00 2026-03-09T13:42:00.935 INFO:teuthology.orchestra.run.vm03.stdout:(118/140): python3-jaraco-text-4.0.0-2.el9.noar 13 MB/s | 26 kB 00:00 2026-03-09T13:42:00.938 INFO:teuthology.orchestra.run.vm03.stdout:(119/140): python3-logutils-0.3.5-21.el9.noarch 15 MB/s | 46 kB 00:00 2026-03-09T13:42:00.942 INFO:teuthology.orchestra.run.vm03.stdout:(120/140): python3-more-itertools-8.12.0-2.el9. 
21 MB/s | 79 kB 00:00 2026-03-09T13:42:00.947 INFO:teuthology.orchestra.run.vm03.stdout:(121/140): python3-natsort-7.1.1-5.el9.noarch.r 13 MB/s | 58 kB 00:00 2026-03-09T13:42:00.952 INFO:teuthology.orchestra.run.vm03.stdout:(122/140): python3-kubernetes-26.1.0-3.el9.noar 57 MB/s | 1.0 MB 00:00 2026-03-09T13:42:00.954 INFO:teuthology.orchestra.run.vm03.stdout:(123/140): python3-pecan-1.4.2-3.el9.noarch.rpm 37 MB/s | 272 kB 00:00 2026-03-09T13:42:00.955 INFO:teuthology.orchestra.run.vm03.stdout:(124/140): python3-portend-3.1.0-2.el9.noarch.r 8.3 MB/s | 16 kB 00:00 2026-03-09T13:42:00.957 INFO:teuthology.orchestra.run.vm03.stdout:(125/140): python3-pyOpenSSL-21.0.0-1.el9.noarc 27 MB/s | 90 kB 00:00 2026-03-09T13:42:00.958 INFO:teuthology.orchestra.run.vm03.stdout:(126/140): python3-repoze-lru-0.7-16.el9.noarch 9.5 MB/s | 31 kB 00:00 2026-03-09T13:42:00.961 INFO:teuthology.orchestra.run.vm03.stdout:(127/140): python3-routes-2.5.1-5.el9.noarch.rp 47 MB/s | 188 kB 00:00 2026-03-09T13:42:00.962 INFO:teuthology.orchestra.run.vm03.stdout:(128/140): python3-rsa-4.9-2.el9.noarch.rpm 14 MB/s | 59 kB 00:00 2026-03-09T13:42:00.964 INFO:teuthology.orchestra.run.vm03.stdout:(129/140): python3-tempora-5.0.0-2.el9.noarch.r 13 MB/s | 36 kB 00:00 2026-03-09T13:42:00.965 INFO:teuthology.orchestra.run.vm03.stdout:(130/140): python3-typing-extensions-4.15.0-1.e 32 MB/s | 86 kB 00:00 2026-03-09T13:42:00.969 INFO:teuthology.orchestra.run.vm03.stdout:(131/140): python3-webob-1.8.8-2.el9.noarch.rpm 52 MB/s | 230 kB 00:00 2026-03-09T13:42:00.969 INFO:teuthology.orchestra.run.vm03.stdout:(132/140): python3-websocket-client-1.2.3-2.el9 20 MB/s | 90 kB 00:00 2026-03-09T13:42:00.972 INFO:teuthology.orchestra.run.vm03.stdout:(133/140): python3-xmltodict-0.12.0-15.el9.noar 11 MB/s | 22 kB 00:00 2026-03-09T13:42:00.974 INFO:teuthology.orchestra.run.vm03.stdout:(134/140): python3-zc-lockfile-2.0-10.el9.noarc 8.3 MB/s | 20 kB 00:00 2026-03-09T13:42:00.979 
INFO:teuthology.orchestra.run.vm03.stdout:(135/140): re2-20211101-20.el9.x86_64.rpm 42 MB/s | 191 kB 00:00 2026-03-09T13:42:00.983 INFO:teuthology.orchestra.run.vm03.stdout:(136/140): python3-werkzeug-2.0.3-3.el9.1.noarc 30 MB/s | 427 kB 00:00 2026-03-09T13:42:00.999 INFO:teuthology.orchestra.run.vm03.stdout:(137/140): thrift-0.15.0-4.el9.x86_64.rpm 80 MB/s | 1.6 MB 00:00 2026-03-09T13:42:01.085 INFO:teuthology.orchestra.run.vm03.stdout:(138/140): protobuf-compiler-3.14.0-17.el9.x86_ 1.3 MB/s | 862 kB 00:00 2026-03-09T13:42:02.016 INFO:teuthology.orchestra.run.vm03.stdout:(139/140): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.1 MB/s | 3.2 MB 00:01 2026-03-09T13:42:02.073 INFO:teuthology.orchestra.run.vm03.stdout:(140/140): librados2-19.2.3-678.ge911bdeb.el9.x 3.1 MB/s | 3.4 MB 00:01 2026-03-09T13:42:02.078 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-09T13:42:02.078 INFO:teuthology.orchestra.run.vm03.stdout:Total 14 MB/s | 211 MB 00:14 2026-03-09T13:42:02.747 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-09T13:42:02.801 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-09T13:42:02.801 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-09T13:42:03.651 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-09T13:42:03.651 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T13:42:04.573 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T13:42:04.602 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142
2026-03-09T13:42:04.615 INFO:teuthology.orchestra.run.vm03.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142
2026-03-09T13:42:04.782 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142
2026-03-09T13:42:04.784 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T13:42:04.845 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T13:42:04.847 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-09T13:42:04.876 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142
2026-03-09T13:42:04.885 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-09T13:42:04.889 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142
2026-03-09T13:42:04.891 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142
2026-03-09T13:42:04.905 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142
2026-03-09T13:42:04.912 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-packaging-20.9-5.el9.noarch 10/142
2026-03-09T13:42:04.922 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142
2026-03-09T13:42:04.923 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T13:42:04.958 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T13:42:04.960 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-09T13:42:04.975 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142
2026-03-09T13:42:05.008 INFO:teuthology.orchestra.run.vm03.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142
2026-03-09T13:42:05.045 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142
2026-03-09T13:42:05.050 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142
2026-03-09T13:42:05.056 INFO:teuthology.orchestra.run.vm03.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142
2026-03-09T13:42:05.061 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142
2026-03-09T13:42:05.088 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142
2026-03-09T13:42:05.097 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142
2026-03-09T13:42:05.107 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142
2026-03-09T13:42:05.114 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142
2026-03-09T13:42:05.118 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142
2026-03-09T13:42:05.124 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142
2026-03-09T13:42:05.153 INFO:teuthology.orchestra.run.vm03.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142
2026-03-09T13:42:05.169 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142
2026-03-09T13:42:05.173 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142
2026-03-09T13:42:05.180 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142
2026-03-09T13:42:05.183 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142
2026-03-09T13:42:05.213 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142
2026-03-09T13:42:05.221 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142
2026-03-09T13:42:05.231 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-09T13:42:05.245 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142
2026-03-09T13:42:05.254 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142
2026-03-09T13:42:05.283 INFO:teuthology.orchestra.run.vm03.stdout: Installing : zip-3.0-35.el9.x86_64 35/142
2026-03-09T13:42:05.288 INFO:teuthology.orchestra.run.vm03.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142
2026-03-09T13:42:05.297 INFO:teuthology.orchestra.run.vm03.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142
2026-03-09T13:42:05.328 INFO:teuthology.orchestra.run.vm03.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 38/142
2026-03-09T13:42:05.388 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142
2026-03-09T13:42:05.413 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142
2026-03-09T13:42:05.424 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142
2026-03-09T13:42:05.428 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142
2026-03-09T13:42:05.435 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142
2026-03-09T13:42:05.444 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142
2026-03-09T13:42:05.451 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142
2026-03-09T13:42:05.455 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142
2026-03-09T13:42:05.472 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142
2026-03-09T13:42:05.498 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142
2026-03-09T13:42:05.506 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142
2026-03-09T13:42:05.514 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142
2026-03-09T13:42:05.528 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142
2026-03-09T13:42:05.540 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142
2026-03-09T13:42:05.554 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142
2026-03-09T13:42:05.621 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 54/142
2026-03-09T13:42:05.629 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142
2026-03-09T13:42:05.641 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142
2026-03-09T13:42:05.692 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142
2026-03-09T13:42:06.085 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142
2026-03-09T13:42:06.104 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142
2026-03-09T13:42:06.110 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142
2026-03-09T13:42:06.119 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142
2026-03-09T13:42:06.130 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142
2026-03-09T13:42:06.137 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142
2026-03-09T13:42:06.142 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142
2026-03-09T13:42:06.151 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142
2026-03-09T13:42:06.155 INFO:teuthology.orchestra.run.vm03.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142
2026-03-09T13:42:06.157 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142
2026-03-09T13:42:06.190 INFO:teuthology.orchestra.run.vm03.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142
2026-03-09T13:42:06.269 INFO:teuthology.orchestra.run.vm03.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142
2026-03-09T13:42:06.283 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142
2026-03-09T13:42:06.345 INFO:teuthology.orchestra.run.vm03.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142
2026-03-09T13:42:06.388 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142
2026-03-09T13:42:06.402 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142
2026-03-09T13:42:06.411 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142
2026-03-09T13:42:06.418 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142
2026-03-09T13:42:06.466 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142
2026-03-09T13:42:06.746 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142
2026-03-09T13:42:06.778 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142
2026-03-09T13:42:06.785 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142
2026-03-09T13:42:06.846 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142
2026-03-09T13:42:06.850 INFO:teuthology.orchestra.run.vm03.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142
2026-03-09T13:42:06.874 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142
2026-03-09T13:42:07.274 INFO:teuthology.orchestra.run.vm03.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142
2026-03-09T13:42:07.366 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142
2026-03-09T13:42:08.189 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142
2026-03-09T13:42:08.222 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142
2026-03-09T13:42:08.229 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142
2026-03-09T13:42:08.235 INFO:teuthology.orchestra.run.vm03.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142
2026-03-09T13:42:08.395 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142
2026-03-09T13:42:08.398 INFO:teuthology.orchestra.run.vm03.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T13:42:08.432 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142
2026-03-09T13:42:08.436 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142
2026-03-09T13:42:08.444 INFO:teuthology.orchestra.run.vm03.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142
2026-03-09T13:42:08.710 INFO:teuthology.orchestra.run.vm03.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142
2026-03-09T13:42:08.712 INFO:teuthology.orchestra.run.vm03.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T13:42:08.735 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142
2026-03-09T13:42:08.738 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142
2026-03-09T13:42:09.901 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T13:42:09.907 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T13:42:09.930 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142
2026-03-09T13:42:09.948 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142
2026-03-09T13:42:09.970 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142
2026-03-09T13:42:10.084 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142
2026-03-09T13:42:10.138 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142
2026-03-09T13:42:10.169 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142
2026-03-09T13:42:10.207 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142
2026-03-09T13:42:10.272 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142
2026-03-09T13:42:10.283 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142
2026-03-09T13:42:10.289 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142
2026-03-09T13:42:10.296 INFO:teuthology.orchestra.run.vm03.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142
2026-03-09T13:42:10.301 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142
2026-03-09T13:42:10.302 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T13:42:10.323 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142
2026-03-09T13:42:10.645 INFO:teuthology.orchestra.run.vm03.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142
2026-03-09T13:42:10.654 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T13:42:10.696 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142
2026-03-09T13:42:10.696 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-09T13:42:10.696 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-09T13:42:10.696 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:10.707 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp
2026-03-09T13:42:17.486 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:17.617 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-09T13:42:17.643 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:17.898 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-09T13:42:17.922 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142
2026-03-09T13:42:17.922 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:17.922 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T13:42:17.922 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T13:42:17.922 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-09T13:42:17.923 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:17.932 INFO:teuthology.orchestra.run.vm03.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142
2026-03-09T13:42:17.979 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142
2026-03-09T13:42:18.002 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T13:42:18.002 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'qat' with GID 994.
2026-03-09T13:42:18.002 INFO:teuthology.orchestra.run.vm03.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-09T13:42:18.002 INFO:teuthology.orchestra.run.vm03.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-09T13:42:18.002 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:18.015 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T13:42:18.052 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142
2026-03-09T13:42:18.052 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-09T13:42:18.052 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:18.102 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142
2026-03-09T13:42:18.187 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142
2026-03-09T13:42:18.192 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-09T13:42:18.207 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142
2026-03-09T13:42:18.207 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:18.207 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-09T13:42:18.208 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:19.114 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-09T13:42:19.143 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142
2026-03-09T13:42:19.143 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:19.143 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-09T13:42:19.143 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T13:42:19.143 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-09T13:42:19.144 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:19.223 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-09T13:42:19.226 INFO:teuthology.orchestra.run.vm03.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142
2026-03-09T13:42:19.233 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142
2026-03-09T13:42:19.258 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142
2026-03-09T13:42:19.355 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-09T13:42:19.917 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142
2026-03-09T13:42:19.923 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-09T13:42:20.464 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142
2026-03-09T13:42:20.466 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-09T13:42:20.533 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142
2026-03-09T13:42:20.591 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142
2026-03-09T13:42:20.593 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-09T13:42:20.616 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142
2026-03-09T13:42:20.616 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:20.616 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-09T13:42:20.617 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T13:42:20.617 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-09T13:42:20.617 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:20.631 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-09T13:42:20.643 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142
2026-03-09T13:42:21.160 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142
2026-03-09T13:42:21.163 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-09T13:42:21.187 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:21.198 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-09T13:42:21.221 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142
2026-03-09T13:42:21.221 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:21.221 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-09T13:42:21.221 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:21.378 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-09T13:42:21.400 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142
2026-03-09T13:42:21.400 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T13:42:21.400 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-09T13:42:21.401 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T13:42:21.401 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-09T13:42:21.401 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T13:42:24.038 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142
2026-03-09T13:42:24.049 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142
2026-03-09T13:42:24.102 INFO:teuthology.orchestra.run.vm03.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142
2026-03-09T13:42:24.111 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142
2026-03-09T13:42:24.171 INFO:teuthology.orchestra.run.vm03.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142
2026-03-09T13:42:24.182 INFO:teuthology.orchestra.run.vm03.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-09T13:42:24.186 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142
2026-03-09T13:42:24.186 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-09T13:42:24.203 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142
2026-03-09T13:42:24.204 INFO:teuthology.orchestra.run.vm03.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T13:42:25.635 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T13:42:25.635 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142
2026-03-09T13:42:25.635 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142
2026-03-09T13:42:25.635 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142
2026-03-09T13:42:25.635 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142
2026-03-09T13:42:25.636 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142
2026-03-09T13:42:25.637 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142
2026-03-09T13:42:25.638 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142
2026-03-09T13:42:25.639 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142
2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142
2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142
2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142
2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying :
libarrow-9.0.0-15.el9.x86_64 97/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 
113/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142 2026-03-09T13:42:25.640 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142 2026-03-09T13:42:25.641 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-rsa-4.9-2.el9.noarch 129/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142 2026-03-09T13:42:25.642 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142 2026-03-09T13:42:25.779 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T13:42:25.779 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout:Upgraded: 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: 
librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout:Installed: 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 
INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: 
libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-09T13:42:25.780 INFO:teuthology.orchestra.run.vm03.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-09T13:42:25.780 
INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T13:42:25.781 
INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-iniconfig-1.1.1-7.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 
2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pluggy-0.13.1-7.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-py-1.10.0-6.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T13:42:25.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytest-6.2.2-7.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 
2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 
2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:42:25.782 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T13:42:25.917 DEBUG:teuthology.parallel:result is None 2026-03-09T13:47:02.703 INFO:teuthology.orchestra.run.vm04.stdout:[MIRROR] protobuf-compiler-3.14.0-17.el9.x86_64.rpm: Curl error (28): Timeout was reached for http://ftp.nsc.ru/pub/centos-9/9-stream/CRB/x86_64/os/Packages/protobuf-compiler-3.14.0-17.el9.x86_64.rpm [Operation too slow. Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T13:52:09.710 INFO:teuthology.orchestra.run.vm04.stdout:[MIRROR] protobuf-compiler-3.14.0-17.el9.x86_64.rpm: Curl error (28): Timeout was reached for http://ftp.nsc.ru/pub/centos-9/9-stream/CRB/x86_64/os/Packages/protobuf-compiler-3.14.0-17.el9.x86_64.rpm [Operation too slow. Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T13:57:16.716 INFO:teuthology.orchestra.run.vm04.stdout:[MIRROR] protobuf-compiler-3.14.0-17.el9.x86_64.rpm: Curl error (28): Timeout was reached for https://ftp.nsc.ru/pub/centos-9/9-stream/CRB/x86_64/os/Packages/protobuf-compiler-3.14.0-17.el9.x86_64.rpm [Operation too slow. 
Less than 1000 bytes/sec transferred the last 300 seconds] 2026-03-09T13:57:17.215 INFO:teuthology.orchestra.run.vm04.stdout:(140/140): protobuf-compiler-3.14.0-17.el9.x86_ 958 B/s | 862 kB 15:21 2026-03-09T13:57:17.216 INFO:teuthology.orchestra.run.vm04.stdout:-------------------------------------------------------------------------------- 2026-03-09T13:57:17.216 INFO:teuthology.orchestra.run.vm04.stdout:Total 232 kB/s | 211 MB 15:33 2026-03-09T13:57:17.707 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T13:57:17.756 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T13:57:17.756 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T13:57:18.591 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T13:57:18.591 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T13:57:19.509 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T13:57:19.522 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/142 2026-03-09T13:57:19.534 INFO:teuthology.orchestra.run.vm04.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/142 2026-03-09T13:57:19.702 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/142 2026-03-09T13:57:19.705 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T13:57:19.765 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T13:57:19.766 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142 2026-03-09T13:57:19.796 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/142 2026-03-09T13:57:19.805 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 
6/142 2026-03-09T13:57:19.808 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/142 2026-03-09T13:57:19.811 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/142 2026-03-09T13:57:19.823 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/142 2026-03-09T13:57:19.829 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-packaging-20.9-5.el9.noarch 10/142 2026-03-09T13:57:19.840 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/142 2026-03-09T13:57:19.841 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T13:57:19.878 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T13:57:19.879 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142 2026-03-09T13:57:19.893 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 13/142 2026-03-09T13:57:19.932 INFO:teuthology.orchestra.run.vm04.stdout: Installing : re2-1:20211101-20.el9.x86_64 14/142 2026-03-09T13:57:19.969 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 15/142 2026-03-09T13:57:19.975 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 16/142 2026-03-09T13:57:19.981 INFO:teuthology.orchestra.run.vm04.stdout: Installing : liboath-2.6.12-1.el9.x86_64 17/142 2026-03-09T13:57:19.986 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 18/142 2026-03-09T13:57:20.016 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 19/142 2026-03-09T13:57:20.027 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 20/142 
2026-03-09T13:57:20.038 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 21/142 2026-03-09T13:57:20.045 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 22/142 2026-03-09T13:57:20.049 INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-5.4.4-4.el9.x86_64 23/142 2026-03-09T13:57:20.055 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 24/142 2026-03-09T13:57:20.084 INFO:teuthology.orchestra.run.vm04.stdout: Installing : unzip-6.0-59.el9.x86_64 25/142 2026-03-09T13:57:20.101 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 26/142 2026-03-09T13:57:20.105 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 27/142 2026-03-09T13:57:20.113 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 28/142 2026-03-09T13:57:20.115 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 29/142 2026-03-09T13:57:20.147 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 30/142 2026-03-09T13:57:20.153 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 31/142 2026-03-09T13:57:20.164 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 32/142 2026-03-09T13:57:20.179 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 33/142 2026-03-09T13:57:20.187 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 34/142 2026-03-09T13:57:20.217 INFO:teuthology.orchestra.run.vm04.stdout: Installing : zip-3.0-35.el9.x86_64 35/142 2026-03-09T13:57:20.223 INFO:teuthology.orchestra.run.vm04.stdout: Installing : luarocks-3.9.2-5.el9.noarch 36/142 2026-03-09T13:57:20.231 
INFO:teuthology.orchestra.run.vm04.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 37/142 2026-03-09T13:57:20.262 INFO:teuthology.orchestra.run.vm04.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 38/142 2026-03-09T13:57:20.326 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 39/142 2026-03-09T13:57:20.343 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 40/142 2026-03-09T13:57:20.353 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rsa-4.9-2.el9.noarch 41/142 2026-03-09T13:57:20.359 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 42/142 2026-03-09T13:57:20.365 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 43/142 2026-03-09T13:57:20.377 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 44/142 2026-03-09T13:57:20.384 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 45/142 2026-03-09T13:57:20.396 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 46/142 2026-03-09T13:57:20.414 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 47/142 2026-03-09T13:57:20.441 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 48/142 2026-03-09T13:57:20.448 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 49/142 2026-03-09T13:57:20.456 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 50/142 2026-03-09T13:57:20.470 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 51/142 2026-03-09T13:57:20.487 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 52/142 2026-03-09T13:57:20.499 
INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 53/142 2026-03-09T13:57:20.566 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 54/142 2026-03-09T13:57:20.575 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 55/142 2026-03-09T13:57:20.586 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 56/142 2026-03-09T13:57:20.638 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 57/142 2026-03-09T13:57:21.018 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 58/142 2026-03-09T13:57:21.034 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 59/142 2026-03-09T13:57:21.040 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 60/142 2026-03-09T13:57:21.048 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 61/142 2026-03-09T13:57:21.057 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 62/142 2026-03-09T13:57:21.064 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 63/142 2026-03-09T13:57:21.069 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 64/142 2026-03-09T13:57:21.081 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 65/142 2026-03-09T13:57:21.086 INFO:teuthology.orchestra.run.vm04.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 66/142 2026-03-09T13:57:21.089 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 67/142 2026-03-09T13:57:21.129 INFO:teuthology.orchestra.run.vm04.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 68/142 2026-03-09T13:57:21.184 
INFO:teuthology.orchestra.run.vm04.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 69/142 2026-03-09T13:57:21.198 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 70/142 2026-03-09T13:57:21.258 INFO:teuthology.orchestra.run.vm04.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 71/142 2026-03-09T13:57:21.295 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-py-1.10.0-6.el9.noarch 72/142 2026-03-09T13:57:21.308 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 73/142 2026-03-09T13:57:21.318 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 74/142 2026-03-09T13:57:21.324 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pluggy-0.13.1-7.el9.noarch 75/142 2026-03-09T13:57:21.365 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-iniconfig-1.1.1-7.el9.noarch 76/142 2026-03-09T13:57:21.637 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 77/142 2026-03-09T13:57:21.671 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 78/142 2026-03-09T13:57:21.678 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 79/142 2026-03-09T13:57:21.740 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-0.3.29-1.el9.x86_64 80/142 2026-03-09T13:57:21.744 INFO:teuthology.orchestra.run.vm04.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 81/142 2026-03-09T13:57:21.770 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 82/142 2026-03-09T13:57:22.162 INFO:teuthology.orchestra.run.vm04.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 83/142 2026-03-09T13:57:22.252 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 84/142 2026-03-09T13:57:23.071 INFO:teuthology.orchestra.run.vm04.stdout: 
Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 85/142 2026-03-09T13:57:23.100 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 86/142 2026-03-09T13:57:23.107 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 87/142 2026-03-09T13:57:23.112 INFO:teuthology.orchestra.run.vm04.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 88/142 2026-03-09T13:57:23.269 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 89/142 2026-03-09T13:57:23.272 INFO:teuthology.orchestra.run.vm04.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142 2026-03-09T13:57:23.303 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 90/142 2026-03-09T13:57:23.306 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 91/142 2026-03-09T13:57:23.314 INFO:teuthology.orchestra.run.vm04.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 92/142 2026-03-09T13:57:23.569 INFO:teuthology.orchestra.run.vm04.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 93/142 2026-03-09T13:57:23.571 INFO:teuthology.orchestra.run.vm04.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142 2026-03-09T13:57:23.589 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 94/142 2026-03-09T13:57:23.591 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 95/142 2026-03-09T13:57:24.691 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142 2026-03-09T13:57:24.696 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142 2026-03-09T13:57:24.715 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 96/142 
2026-03-09T13:57:24.733 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-ply-3.11-14.el9.noarch 97/142 2026-03-09T13:57:24.754 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 98/142 2026-03-09T13:57:24.842 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 99/142 2026-03-09T13:57:24.856 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 100/142 2026-03-09T13:57:24.884 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 101/142 2026-03-09T13:57:24.922 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 102/142 2026-03-09T13:57:24.985 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 103/142 2026-03-09T13:57:24.994 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 104/142 2026-03-09T13:57:25.000 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 105/142 2026-03-09T13:57:25.005 INFO:teuthology.orchestra.run.vm04.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 106/142 2026-03-09T13:57:25.010 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 107/142 2026-03-09T13:57:25.012 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 108/142 2026-03-09T13:57:25.028 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 108/142 2026-03-09T13:57:25.328 INFO:teuthology.orchestra.run.vm04.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 109/142 2026-03-09T13:57:25.334 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142 2026-03-09T13:57:25.381 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 110/142 
2026-03-09T13:57:25.382 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-09T13:57:25.382 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-09T13:57:25.382 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:25.386 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 111/142 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp 2026-03-09T13:57:31.733 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:31.852 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142 2026-03-09T13:57:31.877 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 112/142 2026-03-09T13:57:31.878 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:31.878 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 
2026-03-09T13:57:31.878 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T13:57:31.878 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-09T13:57:31.878 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:32.117 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 113/142 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-09T13:57:32.139 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:32.147 INFO:teuthology.orchestra.run.vm04.stdout: Installing : mailcap-2.1.49-5.el9.noarch 114/142 2026-03-09T13:57:32.150 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 115/142 2026-03-09T13:57:32.168 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T13:57:32.168 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'qat' with GID 994. 2026-03-09T13:57:32.168 INFO:teuthology.orchestra.run.vm04.stdout:Creating group 'libstoragemgmt' with GID 993. 
2026-03-09T13:57:32.168 INFO:teuthology.orchestra.run.vm04.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-09T13:57:32.168 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:32.178 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T13:57:32.202 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 116/142 2026-03-09T13:57:32.202 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-09T13:57:32.202 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:32.243 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 117/142 2026-03-09T13:57:32.316 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 118/142 2026-03-09T13:57:32.321 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142 2026-03-09T13:57:32.336 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 119/142 2026-03-09T13:57:32.336 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:32.336 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T13:57:32.336 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:33.152 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 120/142 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-09T13:57:33.179 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:33.245 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142 2026-03-09T13:57:33.249 INFO:teuthology.orchestra.run.vm04.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 121/142 2026-03-09T13:57:33.255 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 122/142 2026-03-09T13:57:33.279 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 123/142 2026-03-09T13:57:33.282 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142 2026-03-09T13:57:33.826 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 124/142 2026-03-09T13:57:33.832 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142 
2026-03-09T13:57:34.396 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 125/142 2026-03-09T13:57:34.398 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142 2026-03-09T13:57:34.462 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 126/142 2026-03-09T13:57:34.526 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 127/142 2026-03-09T13:57:34.528 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142 2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 128/142 2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-09T13:57:34.552 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:34.566 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142 2026-03-09T13:57:34.576 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 129/142 2026-03-09T13:57:35.160 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 130/142 2026-03-09T13:57:35.163 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 131/142 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-09T13:57:35.186 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:35.198 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142 2026-03-09T13:57:35.219 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 132/142 2026-03-09T13:57:35.219 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-09T13:57:35.219 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-09T13:57:35.219 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:35.376 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142 2026-03-09T13:57:35.395 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 133/142 2026-03-09T13:57:35.396 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T13:57:35.396 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T13:57:35.396 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-09T13:57:35.396 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-09T13:57:35.396 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:57:38.293 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 134/142 2026-03-09T13:57:38.307 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/142 2026-03-09T13:57:38.363 INFO:teuthology.orchestra.run.vm04.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 136/142 2026-03-09T13:57:38.374 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-pytest-6.2.2-7.el9.noarch 137/142 2026-03-09T13:57:38.435 INFO:teuthology.orchestra.run.vm04.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 138/142 2026-03-09T13:57:38.445 INFO:teuthology.orchestra.run.vm04.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142 2026-03-09T13:57:38.449 INFO:teuthology.orchestra.run.vm04.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 140/142 2026-03-09T13:57:38.449 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-09T13:57:38.463 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 141/142 2026-03-09T13:57:38.463 INFO:teuthology.orchestra.run.vm04.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 142/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/142 2026-03-09T13:57:39.799 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/142 2026-03-09T13:57:39.799 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/142 2026-03-09T13:57:39.800 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/142 2026-03-09T13:57:39.800 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/142 2026-03-09T13:57:39.801 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/142 2026-03-09T13:57:39.801 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 51/142 
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/142 2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : 
python3-devel-3.9.25-3.el9.x86_64 68/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-iniconfig-1.1.1-7.el9.noarch 69/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/142
2026-03-09T13:57:39.803 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pluggy-0.13.1-7.el9.noarch 78/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 79/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-py-1.10.0-6.el9.noarch 80/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 81/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 82/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytest-6.2.2-7.el9.noarch 83/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 84/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 85/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 86/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 87/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 88/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 89/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 90/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 91/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 92/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 93/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 94/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 95/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 96/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 97/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 98/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 99/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 100/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 101/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 102/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 103/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 104/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 105/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 106/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 107/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 108/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 109/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 110/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 111/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 112/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 113/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 114/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 115/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 116/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 117/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 118/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 119/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 120/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 121/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 124/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 125/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 126/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 127/142
2026-03-09T13:57:39.804 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 128/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 129/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 130/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 131/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 132/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 133/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 134/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 135/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 136/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 137/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 138/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 139/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 140/142
2026-03-09T13:57:39.805 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 141/142
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 142/142
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout:Upgraded:
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout:Installed:
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-09T13:57:39.917 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-09T13:57:39.918 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-iniconfig-1.1.1-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pluggy-0.13.1-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-py-1.10.0-6.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytest-6.2.2-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64
2026-03-09T13:57:39.919 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T13:57:39.920 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64
2026-03-09T13:57:39.920 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:57:39.920 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T13:57:40.024 DEBUG:teuthology.parallel:result is None
2026-03-09T13:57:40.024 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:57:40.673 DEBUG:teuthology.orchestra.run.vm03:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T13:57:40.698 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678.ge911bdeb.el9
2026-03-09T13:57:40.698 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-09T13:57:40.698 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-09T13:57:40.699 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:57:41.343 DEBUG:teuthology.orchestra.run.vm04:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-09T13:57:41.364 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678.ge911bdeb.el9
2026-03-09T13:57:41.364 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-09T13:57:41.364 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-09T13:57:41.365 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T13:57:41.365 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:57:41.365 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T13:57:41.393 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:57:41.393 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T13:57:41.434 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T13:57:41.434 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:57:41.434 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T13:57:41.460 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T13:57:41.528 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:57:41.528 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T13:57:41.557 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T13:57:41.626 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T13:57:41.627 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:57:41.627 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T13:57:41.659 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T13:57:41.733 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:57:41.733 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T13:57:41.758 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T13:57:41.822 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T13:57:41.823 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:57:41.823 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T13:57:41.858 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T13:57:41.935 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T13:57:41.936 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T13:57:41.964 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T13:57:42.032 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T13:57:42.113 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 1, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'but it is still running', 'overall HEALTH_', '\\(OSDMAP_FLAGS\\)', '\\(PG_', '\\(OSD_', '\\(OBJECT_', '\\(POOL_APP_NOT_ENABLED\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'cephadm-package'}
2026-03-09T13:57:42.113 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T13:57:42.114 INFO:tasks.cephadm:Cluster fsid is f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:57:42.114 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T13:57:42.114 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.103', 'mon.c': '[v2:192.168.123.103:3301,v1:192.168.123.103:6790]', 'mon.b': '192.168.123.104'}
2026-03-09T13:57:42.114 INFO:tasks.cephadm:First mon is mon.a on vm03
2026-03-09T13:57:42.114 INFO:tasks.cephadm:First mgr is y
2026-03-09T13:57:42.114 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T13:57:42.114 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s)
2026-03-09T13:57:42.143 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s)
2026-03-09T13:57:42.173 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-09T13:57:42.174 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T13:57:42.185 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T13:57:42.369 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T13:57:42.380 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout:{
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout: ]
2026-03-09T13:58:20.649 INFO:teuthology.orchestra.run.vm04.stdout:}
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout:{
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout: ]
2026-03-09T13:58:31.358 INFO:teuthology.orchestra.run.vm03.stdout:}
2026-03-09T13:58:31.375 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph
2026-03-09T13:58:31.401 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph
2026-03-09T13:58:31.430 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph
2026-03-09T13:58:31.464 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph
2026-03-09T13:58:31.497 INFO:tasks.cephadm:Writing seed config...
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [global] ms type = async
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T13:58:31.497 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-09T13:58:31.498 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T13:58:31.498 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T13:58:31.520 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
mon election default strategy = 1
ms bind msgr1 = False
ms bind msgr2 = True
ms type = async
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T13:58:31.520 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service 2026-03-09T13:58:31.563 DEBUG:teuthology.orchestra.run.vm03:mgr.y> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service 2026-03-09T13:58:31.604 INFO:tasks.cephadm:Bootstrapping... 2026-03-09T13:58:31.604 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.103 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T13:58:31.742 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-09T13:58:31.743 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label'] 2026-03-09T13:58:31.743 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T13:58:31.743 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present... 
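cephadm echoes its own invocation as a Python-style argv list (the `cephadm ['--image', ...]` record above). A small sketch, not part of teuthology or cephadm, of turning such an echoed argv into a flag-to-value mapping; the argv below is an abbreviated subset of the logged one, and the parser assumes each `--flag` value, when present, is the next non-dash token:

```python
import ast

# Abbreviated argv as echoed by cephadm in the log above
line = ("['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', "
        "'-v', 'bootstrap', '--fsid', 'f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4', "
        "'--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', "
        "'--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label']")
argv = ast.literal_eval(line)

def parse_flags(argv):
    """Map each --flag to the token that follows it, or True for bare switches."""
    flags, i = {}, 0
    while i < len(argv):
        tok = argv[i]
        if tok.startswith('--'):
            if i + 1 < len(argv) and not argv[i + 1].startswith('-'):
                flags[tok[2:]] = argv[i + 1]
                i += 2
                continue
            flags[tok[2:]] = True  # switch with no value, e.g. --skip-admin-label
        i += 1
    return flags

flags = parse_flags(argv)
print(flags['mon-ip'])  # 192.168.123.103
```

Reading the argv this way makes it easy to confirm, for instance, that the run bootstrapped `mon.a`/`mgr.y` with `--skip-monitoring-stack`, matching the roles in the job config.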
2026-03-09T13:58:31.764 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-09T13:58:31.764 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present... 2026-03-09T13:58:31.764 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place... 2026-03-09T13:58:31.770 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T13:58:31.771 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T13:58:31.776 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T13:58:31.776 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T13:58:31.781 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-09T13:58:31.786 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-09T13:58:31.786 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-09T13:58:31.786 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check... 
2026-03-09T13:58:31.804 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 5.8.0 2026-03-09T13:58:31.804 INFO:teuthology.orchestra.run.vm03.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-09T13:58:31.804 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present 2026-03-09T13:58:31.804 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present 2026-03-09T13:58:31.811 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T13:58:31.811 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T13:58:31.817 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T13:58:31.817 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T13:58:31.823 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Unit chronyd.service is enabled and running 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 140023564593904 on /run/cephadm/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.lock 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Lock 140023564593904 acquired on /run/cephadm/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.lock 2026-03-09T13:58:31.829 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 3300 ... 2026-03-09T13:58:31.830 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ... 
2026-03-09T13:58:31.830 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:3300, 192.168.123.103:6789], mon addrv is [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T13:58:31.833 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.103 metric 100 2026-03-09T13:58:31.833 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.103 metric 100 2026-03-09T13:58:31.835 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T13:58:31.835 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link noprefixroute 2026-03-09T13:58:31.837 INFO:teuthology.orchestra.run.vm03.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T13:58:31.838 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-09T13:58:31.838 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-09T13:58:31.838 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-09T13:58:31.838 
INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T13:58:31.838 INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Getting image source signatures 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-09T13:58:33.079 INFO:teuthology.orchestra.run.vm03.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-09T13:58:33.201 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T13:58:33.201 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T13:58:33.201 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image... 
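The bootstrap records above include the mon address vector, `[v2:192.168.123.103:3300,v1:192.168.123.103:6789]`, where `v2` is the msgr2 endpoint (default port 3300) and `v1` the legacy endpoint (6789). A sketch of splitting such an addrvec string into tuples; this is an illustration for IPv4 addresses only, not a general Ceph addrvec parser:

```python
def parse_addrvec(s):
    """Split '[v2:ip:port,v1:ip:port]' into (protocol, ip, port) tuples.

    Assumes IPv4 (exactly two ':' per entry); IPv6 addrvecs would need
    a real parser.
    """
    entries = []
    for part in s.strip('[]').split(','):
        proto, ip, port = part.split(':')
        entries.append((proto, ip, int(port)))
    return entries

addrv = parse_addrvec('[v2:192.168.123.103:3300,v1:192.168.123.103:6789]')
print(addrv)
# [('v2', '192.168.123.103', 3300), ('v1', '192.168.123.103', 6789)]
```

With `ms bind msgr1 = False` in this job's overrides, daemons advertise both entries in the monmap but only the v2 endpoint accepts connections.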
2026-03-09T13:58:33.343 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167 2026-03-09T13:58:33.343 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys... 2026-03-09T13:58:33.474 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAJ0q5piKxFGRAAC0gmOZY1+MjOBNO8RaItZA== 2026-03-09T13:58:33.587 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAJ0q5pf/6VIRAAJ3Ll40AnG2k7f9T/3IRUmQ== 2026-03-09T13:58:33.694 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAJ0q5pfO9aJxAAEsB4Il0VaD2M1Vm3j8s7jw== 2026-03-09T13:58:33.695 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap... 2026-03-09T13:58:33.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T13:58:33.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v2:192.168.123.103:3300,v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:33.830 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon... 
2026-03-09T13:58:33.952 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on 2026-03-09T13:58:34.112 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-09T13:58:34.233 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T13:58:34.365 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target → /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target. 2026-03-09T13:58:34.365 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target → /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target. 2026-03-09T13:58:34.512 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a 2026-03-09T13:58:34.512 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service: Unit ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service not loaded. 2026-03-09T13:58:34.658 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target.wants/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service → /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@.service. 2026-03-09T13:58:34.818 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T13:58:34.818 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-09T13:58:34.818 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start... 2026-03-09T13:58:34.818 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon... 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services: 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.127423s) 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data: 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:mon is available 2026-03-09T13:58:35.004 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf... 
2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:35.181 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T13:58:35.182 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T13:58:35.182 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T13:58:35.182 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf... 2026-03-09T13:58:35.380 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor... 
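The "Generating new minimal ceph.conf" step above emits a plain INI dump (the `[global]`/`[mgr]`/`[osd]` sections shown). A sketch of reading such a dump with Python's standard `configparser`, using a subset of the logged options; restricting delimiters to `=` keeps values like the `mon_host` addrvec (which contains `:`) intact:

```python
import configparser

# Subset of the minimal ceph.conf echoed in the log above
minimal_conf = """\
[global]
fsid = f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
osd_crush_chooseleaf_type = 0

[mgr]
mgr/telemetry/nag = false

[osd]
osd_map_max_advance = 10
osd_sloppy_crc = true
"""

# '=' only: otherwise the default ':' delimiter would split mon_host values
cp = configparser.ConfigParser(delimiters=('=',))
cp.read_string(minimal_conf)

print(cp['global']['fsid'])
print(cp.getboolean('osd', 'osd_sloppy_crc'))  # True
```

Option names with slashes, such as `mgr/telemetry/nag`, are ordinary keys to `configparser`, so no custom parsing is needed for this shape of dump.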
2026-03-09T13:58:35.456 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 systemd[1]: Stopping Ceph mon.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52295]: 2026-03-09T13:58:35.453+0000 7f3dee807640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52295]: 2026-03-09T13:58:35.453+0000 7f3dee807640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 podman[52498]: 2026-03-09 13:58:35.555476282 +0000 UTC m=+0.115014471 container died da8802e0a41360083ea59c1e0c76dd585b45e2df222e5d9ef8cd9cad0631ccc7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS) 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 
vm03 podman[52498]: 2026-03-09 13:58:35.568699746 +0000 UTC m=+0.128237944 container remove da8802e0a41360083ea59c1e0c76dd585b45e2df222e5d9ef8cd9cad0631ccc7 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default) 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 bash[52498]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service: Deactivated successfully. 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 systemd[1]: Stopped Ceph mon.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T13:58:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 systemd[1]: Starting Ceph mon.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T13:58:35.973 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 podman[52566]: 2026-03-09 13:58:35.706905047 +0000 UTC m=+0.010454121 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 podman[52566]: 2026-03-09 13:58:35.861572965 +0000 UTC m=+0.165122039 container create da7bf2a97f80ce3e01d1e1a92508b95921e5b1638288faf7b5865b7d8e821d7e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 podman[52566]: 2026-03-09 13:58:35.957741897 +0000 UTC m=+0.261290982 container init da7bf2a97f80ce3e01d1e1a92508b95921e5b1638288faf7b5865b7d8e821d7e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 podman[52566]: 2026-03-09 13:58:35.961119831 +0000 UTC m=+0.264668905 container start da7bf2a97f80ce3e01d1e1a92508b95921e5b1638288faf7b5865b7d8e821d7e (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid) 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 bash[52566]: da7bf2a97f80ce3e01d1e1a92508b95921e5b1638288faf7b5865b7d8e821d7e 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 systemd[1]: Started Ceph mon.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 
2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: pidfile_write: ignore empty --pid-file 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: load: jerasure load: lrc 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: RocksDB version: 7.9.2 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Git sha 0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: DB SUMMARY 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: DB Session ID: A160BZRKS8OTW26IL4YR 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: CURRENT file: CURRENT 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 
ceph-mon[52586]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 76817 ; 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.error_if_exists: 0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.create_if_missing: 0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.paranoid_checks: 1 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.env: 0x55fe0a979dc0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.info_log: 0x55fe0b8b0700 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.statistics: (nil) 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.use_fsync: 0 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_log_file_size: 0 2026-03-09T13:58:36.095 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T13:58:36.095 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_fallocate: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.use_direct_reads: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.db_log_dir: 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.wal_dir: 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T13:58:36.096 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.write_buffer_manager: 0x55fe0b8b5900 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: 
rocksdb: Options.enable_thread_tracking: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.unordered_write: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.row_cache: None 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.wal_filter: None 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.two_write_queues: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.manual_wal_flush: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.wal_compression: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 
ceph-mon[52586]: rocksdb: Options.atomic_flush: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.log_readahead_size: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_background_jobs: 2 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_background_compactions: -1 2026-03-09T13:58:36.096 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_subcompactions: 1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_open_files: -1 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T13:58:36.096 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: 
Options.compaction_readahead_size: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_background_flushes: -1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Compression algorithms supported: 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kZSTD supported: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kXpressCompression supported: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kBZip2Compression supported: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kLZ4Compression supported: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kZlibCompression supported: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: kSnappyCompression supported: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T13:58:36.097 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.merge_operator: 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_filter: None 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fe0b8b0640) 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: pin_top_level_index_and_filter: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: index_type: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_index_type: 0 
2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: index_shortening: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: checksum: 4 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: no_block_cache: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache: 0x55fe0b8d5350 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_name: BinnedLRUCache 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_options: 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: capacity : 536870912 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: num_shard_bits : 4 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: strict_capacity_limit : 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: high_pri_pool_ratio: 0.000 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_cache_compressed: (nil) 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: persistent_cache: (nil) 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size: 4096 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_size_deviation: 10 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_restart_interval: 16 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: index_block_restart_interval: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: metadata_block_size: 4096 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: partition_filters: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: use_delta_encoding: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: filter_policy: bloomfilter 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: whole_key_filtering: 1 2026-03-09T13:58:36.097 
INFO:journalctl@ceph.mon.a.vm03.stdout: verify_compression: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: read_amp_bytes_per_bit: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: format_version: 5 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_index_compression: 1 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: block_align: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: max_auto_readahead_size: 262144 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: prepopulate_block_cache: 0 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: initial_auto_readahead_size: 8192 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression: NoCompression 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T13:58:36.097 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.num_levels: 7 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.min_write_buffer_number_to_merge: 1 
2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T13:58:36.098 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.target_file_size_base: 67108864 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T13:58:36.098 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.inplace_update_support: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.bloom_locality: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.max_successive_merges: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.paranoid_file_checks: 0 
2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T13:58:36.098 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.ttl: 2592000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.enable_blob_files: false 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.min_blob_size: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T13:58:36.099 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0c701596-62d3-4cd0-a28e-67fbd9f3f0cf 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064715982894, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064715984507, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 73671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 231, "table_properties": {"data_size": 71950, "index_size": 174, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 10026, "raw_average_key_size": 49, "raw_value_size": 66365, "raw_average_value_size": 328, "num_data_blocks": 8, "num_entries": 202, "num_filter_entries": 202, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773064715, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c701596-62d3-4cd0-a28e-67fbd9f3f0cf", "db_session_id": "A160BZRKS8OTW26IL4YR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064715984573, "job": 1, "event": "recovery_finished"} 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fe0b8d6e00 
2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: DB pointer 0x55fe0b9ec000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: ** DB Stats ** 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: ** Compaction Stats [default] ** 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: L0 2/0 73.80 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Sum 2/0 73.80 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 49.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: ** Compaction Stats [default] ** 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 49.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T13:58:36.099 
INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Cumulative compaction: 0.00 GB write, 10.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Interval compaction: 0.00 GB write, 10.07 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Block cache BinnedLRUCache@0x55fe0b8d5350#2 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: starting mon.a rank 0 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 
f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???) e1 preinit fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).mds e1 new map 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).mds e1 print_map 2026-03-09T13:58:36.099 INFO:journalctl@ceph.mon.a.vm03.stdout: e1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: btime 2026-03-09T13:58:34:849802+0000 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: legacy client fscid: -1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout: No filesystems configured 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 
vm03 ceph-mon[52586]: mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).mgr e0 loading version 1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).mgr e1 active server: (0) 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:35 vm03 ceph-mon[52586]: mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: monmap epoch 1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: last_changed 2026-03-09T13:58:33.785921+0000 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: min_mon_release 19 (squid) 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: election_strategy: 1 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: fsmap 2026-03-09T13:58:36.100 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T13:58:36.100 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:36 vm03 ceph-mon[52586]: mgrmap e1: no daemons active 2026-03-09T13:58:36.181 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T13:58:36.182 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T13:58:36.182 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr... 2026-03-09T13:58:36.182 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T13:58:36.183 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T13:58:36.322 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y 2026-03-09T13:58:36.322 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service: Unit ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service not loaded. 2026-03-09T13:58:36.449 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.target.wants/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service → /etc/systemd/system/ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@.service. 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. 
firewalld.service is not available 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start... 2026-03-09T13:58:36.596 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr... 2026-03-09T13:58:36.719 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:36 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:36.689+0000 7f2ae7cfa140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4", 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:36.809 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T13:58:36.809 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T13:58:36.810 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:58:34:849802+0000", 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T13:58:36.810 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:58:34.850440+0000", 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:58:36.810 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)... 2026-03-09T13:58:37.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:36 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:36.734+0000 7f2ae7cfa140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T13:58:37.463 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/140323450' entity='client.admin' 2026-03-09T13:58:37.463 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:37 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3439335771' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:58:37.463 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:37.149+0000 7f2ae7cfa140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:37.460+0000 7f2ae7cfa140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: from numpy import show_config as show_numpy_config 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:37.542+0000 7f2ae7cfa140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:37.577+0000 7f2ae7cfa140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:58:37.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:37 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:37.643+0000 7f2ae7cfa140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.121+0000 7f2ae7cfa140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.230+0000 7f2ae7cfa140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.266+0000 7f2ae7cfa140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.302+0000 7f2ae7cfa140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.343+0000 7f2ae7cfa140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:58:38.384 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.381+0000 7f2ae7cfa140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:58:38.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.544+0000 7f2ae7cfa140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:58:38.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.593+0000 7f2ae7cfa140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4", 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: 
stdout 0 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T13:58:39.004 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.005 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:58:34:849802+0000", 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T13:58:39.005 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:58:34.850440+0000", 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:58:39.005 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)... 2026-03-09T13:58:39.116 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:39 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/980767895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:58:39.116 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:38 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:38.807+0000 7f2ae7cfa140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:58:39.389 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.113+0000 7f2ae7cfa140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:58:39.390 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.148+0000 7f2ae7cfa140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T13:58:39.390 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.187+0000 7f2ae7cfa140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:58:39.390 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.260+0000 7f2ae7cfa140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:58:39.390 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.298+0000 7f2ae7cfa140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:58:39.390 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.386+0000 7f2ae7cfa140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:58:39.666 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.507+0000 7f2ae7cfa140 -1 mgr[py] Module 
test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:58:40.026 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.663+0000 7f2ae7cfa140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:58:40.026 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:39 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:39.697+0000 7f2ae7cfa140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: Activating manager daemon y 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: mgrmap e2: y(active, starting, since 0.00390677s) 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: Manager daemon y is now available 
2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y'
2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y'
2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T13:58:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:40 vm03 ceph-mon[52586]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y'
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4",
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": {
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-09T13:58:41.311 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a"
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-09T13:58:41.312 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T13:58:34:849802+0000",
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful"
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ],
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T13:58:34.850440+0000",
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout },
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:58:41.313 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T13:58:41.578 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T13:58:41.579 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T13:58:41.579 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T13:58:41.579 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module...
2026-03-09T13:58:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:41 vm03 ceph-mon[52586]: mgrmap e3: y(active, since 1.00852s)
2026-03-09T13:58:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:41 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3970980266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-09T13:58:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:41 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4218896828' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:42 vm03 ceph-mon[52586]: mgrmap e4: y(active, since 2s)
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:42 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4287327191' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:42 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setuser ceph since I am not root
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:42 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setgroup ceph since I am not root
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:42 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:42.861+0000 7f228873c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T13:58:42.995 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:42 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:42.905+0000 7f228873c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T13:58:43.047 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 5,
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-09T13:58:43.048 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 5...
2026-03-09T13:58:43.615 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:43.301+0000 7f228873c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T13:58:43.986 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:43 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4287327191' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-09T13:58:43.986 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:43 vm03 ceph-mon[52586]: mgrmap e5: y(active, since 3s)
2026-03-09T13:58:43.986 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:43 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3937136790' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:43.612+0000 7f228873c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: from numpy import show_config as show_numpy_config
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:43.703+0000 7f228873c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:43.738+0000 7f228873c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T13:58:43.987 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:43.807+0000 7f228873c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.305+0000 7f228873c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.417+0000 7f228873c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.454+0000 7f228873c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.488+0000 7f228873c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.527+0000 7f228873c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T13:58:44.568 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.565+0000 7f228873c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T13:58:44.991 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.731+0000 7f228873c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T13:58:44.991 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.782+0000 7f228873c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T13:58:45.290 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:44 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:44.988+0000 7f228873c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T13:58:45.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.287+0000 7f228873c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T13:58:45.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.325+0000 7f228873c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T13:58:45.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.365+0000 7f228873c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T13:58:45.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.442+0000 7f228873c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T13:58:45.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.476+0000 7f228873c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T13:58:45.822 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.553+0000 7f228873c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T13:58:45.822 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.657+0000 7f228873c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T13:58:45.822 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.784+0000 7f228873c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:45.818+0000 7f228873c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: Active manager daemon y restarted
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: Activating manager daemon y
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: osdmap e2: 0 total, 0 up, 0 in
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: mgrmap e6: y(active, starting, since 0.00538484s)
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: Manager daemon y is now available
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:58:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-09T13:58:46.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:45 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-09T13:58:46.879 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:58:46.880 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-09T13:58:46.880 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-09T13:58:46.880 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:58:46.880 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 5 is available
2026-03-09T13:58:46.880 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm...
2026-03-09T13:58:47.382 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: Found migration_current of "None". Setting to last migration.
2026-03-09T13:58:47.383 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:47.383 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:47.383 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: mgrmap e7: y(active, since 1.00988s)
2026-03-09T13:58:47.383 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:47.383 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:47 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:58:47.387 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-09T13:58:47.387 INFO:teuthology.orchestra.run.vm03.stdout:Generating ssh key...
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Generating public/private ed25519 key pair.
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Your identification has been saved in /tmp/tmp0a3gpadp/key
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Your public key has been saved in /tmp/tmp0a3gpadp/key.pub
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: The key fingerprint is:
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: SHA256:CR/gScslc7JzI97WLe8Cc9Me1rpaHRUGK9ZxnI1WbAE ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: The key's randomart image is:
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: +--[ED25519 256]--+
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | * o E+O=|
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | + @ . Bo=|
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | X + o + ..|
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | . B =... . |
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | . S o.... |
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | .o oo+... |
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | + +oo. |
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | .oo |
2026-03-09T13:58:47.652 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: | .oo. |
2026-03-09T13:58:47.653 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: +----[SHA256]-----+
2026-03-09T13:58:47.914 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/jz36QecPq2WTKyERcoqWG/qkvEVctIuNPVILdqIP4 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:58:47.914 INFO:teuthology.orchestra.run.vm03.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-09T13:58:47.914 INFO:teuthology.orchestra.run.vm03.stdout:Adding key to root@localhost authorized_keys...
2026-03-09T13:58:47.914 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03...
2026-03-09T13:58:48.416 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:47] ENGINE Bus STARTING
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:47] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:47] ENGINE Client ('192.168.123.103', 33358) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:47] ENGINE Serving on http://192.168.123.103:8765
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:47] ENGINE Bus STARTED
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: Generating ssh key...
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:48.417 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:48 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:49.606 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103'
2026-03-09T13:58:49.607 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service...
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: Deploying cephadm binary to vm03
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: mgrmap e8: y(active, since 2s)
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:49.861 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:49 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:58:49.889 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-09T13:58:49.889 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service...
2026-03-09T13:58:50.146 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-09T13:58:50.655 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module...
2026-03-09T13:58:51.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:50 vm03 ceph-mon[52586]: Added host vm03
2026-03-09T13:58:51.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:50 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:51.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:50 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:51.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3151910805' entity='client.admin'
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setuser ceph since I am not root
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setgroup ceph since I am not root
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:52.279+0000 7f658b3d3140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:52.330+0000 7f658b3d3140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: Saving service mon spec with placement count:5
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: Saving service mgr spec with placement count:2
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/333503822' entity='client.admin'
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3702495033' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:52.333 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:52 vm03 ceph-mon[52586]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y'
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 9,
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "y",
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart...
2026-03-09T13:58:52.489 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 9...
2026-03-09T13:58:53.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:52.752+0000 7f658b3d3140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.063+0000 7f658b3d3140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: from numpy import show_config as show_numpy_config 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.156+0000 7f658b3d3140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.190+0000 7f658b3d3140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.255+0000 7f658b3d3140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:53 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3702495033' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:53 vm03 ceph-mon[52586]: mgrmap e9: y(active, since 6s) 2026-03-09T13:58:53.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:53 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/244678179' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.734+0000 7f658b3d3140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.836+0000 7f658b3d3140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.874+0000 7f658b3d3140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.906+0000 7f658b3d3140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.945+0000 7f658b3d3140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:58:54.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:53.979+0000 7f658b3d3140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:58:54.408 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.150+0000 7f658b3d3140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:58:54.408 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.197+0000 7f658b3d3140 -1 mgr[py] Module 
rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:58:54.688 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.405+0000 7f658b3d3140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:58:54.951 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.685+0000 7f658b3d3140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:58:54.951 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.722+0000 7f658b3d3140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T13:58:54.951 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.761+0000 7f658b3d3140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:58:54.951 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.833+0000 7f658b3d3140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:58:54.951 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.872+0000 7f658b3d3140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:58:55.228 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:54.947+0000 7f658b3d3140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:58:55.228 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:55 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:55.054+0000 7f658b3d3140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:58:55.228 
INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:55 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:55.189+0000 7f658b3d3140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: Active manager daemon y restarted 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: Activating manager daemon y 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: mgrmap e10: y(active, starting, since 0.00666879s) 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: Manager daemon y is now available 2026-03-09T13:58:55.543 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T13:58:55.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:58:55 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:58:55.225+0000 7f658b3d3140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:58:56.300 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T13:58:56.300 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-09T13:58:56.301 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T13:58:56.301 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T13:58:56.301 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 9 is available 2026-03-09T13:58:56.301 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate... 2026-03-09T13:58:56.663 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T13:58:56.663 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user... 
2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:56] ENGINE Bus STARTING 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:56] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: mgrmap e11: y(active, since 1.01092s) 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:57.034 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:57.061 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$yzhnVpklUq7rxRz7PsE2Nu1k6XITVFlJqMDfgY5dNxg.04xpIkfLi", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773064737, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T13:58:57.061 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number... 2026-03-09T13:58:57.383 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at: 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout: URL: https://vm03.local:8443/ 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout: User: admin 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout: Password: gl88dggl7p 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.384 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config directory 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: sudo /sbin/cephadm shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: sudo /sbin/cephadm shell 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.670 INFO:teuthology.orchestra.run.vm03.stdout: ceph telemetry on 2026-03-09T13:58:57.671 
INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.671 INFO:teuthology.orchestra.run.vm03.stdout:For more information see: 2026-03-09T13:58:57.671 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.671 INFO:teuthology.orchestra.run.vm03.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T13:58:57.671 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:58:57.671 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete. 2026-03-09T13:58:57.702 INFO:tasks.cephadm:Fetching config... 2026-03-09T13:58:57.702 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:58:57.702 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T13:58:57.716 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T13:58:57.716 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:58:57.716 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T13:58:57.785 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T13:58:57.785 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:58:57.785 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/keyring of=/dev/stdout 2026-03-09T13:58:57.851 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T13:58:57.851 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:58:57.851 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T13:58:57.907 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T13:58:57.907 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/jz36QecPq2WTKyERcoqWG/qkvEVctIuNPVILdqIP4 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T13:58:57.983 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/jz36QecPq2WTKyERcoqWG/qkvEVctIuNPVILdqIP4 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:57.993 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/jz36QecPq2WTKyERcoqWG/qkvEVctIuNPVILdqIP4 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T13:58:58.023 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/jz36QecPq2WTKyERcoqWG/qkvEVctIuNPVILdqIP4 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:58:58.033 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T13:58:58.222 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:56] ENGINE Serving 
on https://192.168.123.103:7150 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:56] ENGINE Bus STARTED 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: [09/Mar/2026:13:58:56] ENGINE Client ('192.168.123.103', 42924) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/84120465' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T13:58:58.246 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:58 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/2692635761' entity='client.admin' 2026-03-09T13:58:58.525 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T13:58:58.526 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T13:58:58.718 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:58:59.049 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm04 2026-03-09T13:58:59.049 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:58:59.049 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.conf 2026-03-09T13:58:59.070 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:58:59.070 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:58:59.134 INFO:tasks.cephadm:Adding host vm04 to orchestrator... 2026-03-09T13:58:59.134 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch host add vm04 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: mgrmap e12: y(active, since 2s) 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1551614139' entity='client.admin' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:58:59.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:58:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T13:58:59.313 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:00 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:01.081 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm04' with addr '192.168.123.104' 2026-03-09T13:59:01.145 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch host ls --format=json 2026-03-09T13:59:01.325 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:01.364 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:01 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:01.364 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:01 vm03 ceph-mon[52586]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T13:59:01.364 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:01 vm03 ceph-mon[52586]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:01.364 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:01 vm03 ceph-mon[52586]: Deploying cephadm binary to vm04 2026-03-09T13:59:01.569 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:59:01.569 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}] 2026-03-09T13:59:01.632 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T13:59:01.632 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd crush tunables default 2026-03-09T13:59:01.787 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: Added host vm04 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:02.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1820961916' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T13:59:02.451 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default 2026-03-09T13:59:02.515 INFO:tasks.cephadm:Adding mon.a on vm03 2026-03-09T13:59:02.515 INFO:tasks.cephadm:Adding mon.c on vm03 2026-03-09T13:59:02.515 INFO:tasks.cephadm:Adding mon.b on vm04 2026-03-09T13:59:02.515 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply mon '3;vm03:192.168.123.103=a;vm03:[v2:192.168.123.103:3301,v1:192.168.123.103:6790]=c;vm04:192.168.123.104=b' 2026-03-09T13:59:02.734 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T13:59:02.777 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T13:59:03.019 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled mon update... 2026-03-09T13:59:03.089 DEBUG:teuthology.orchestra.run.vm03:mon.c> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.c.service 2026-03-09T13:59:03.090 DEBUG:teuthology.orchestra.run.vm04:mon.b> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.b.service 2026-03-09T13:59:03.092 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T13:59:03.092 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mon dump -f json 2026-03-09T13:59:03.308 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T13:59:03.351 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: mgrmap e13: y(active, since 6s) 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1820961916' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:03.367 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:03.678 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:59:03.678 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":1,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","modified":"2026-03-09T13:58:33.785921Z","created":"2026-03-09T13:58:33.785921Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T13:59:03.678 INFO:teuthology.orchestra.run.vm04.stderr:dumped monmap epoch 1 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm03:[v2:192.168.123.103:3301,v1:192.168.123.103:6790]=c;vm04:192.168.123.104=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: Saving service mon spec with placement vm03:192.168.123.103=a;vm03:[v2:192.168.123.103:3301,v1:192.168.123.103:6790]=c;vm04:192.168.123.104=b;count:3 
2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/3030667033' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T13:59:04.731 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T13:59:04.731 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mon dump -f json 2026-03-09T13:59:05.025 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T13:59:05.454 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T13:59:05.454 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":1,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","modified":"2026-03-09T13:58:33.785921Z","created":"2026-03-09T13:58:33.785921Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T13:59:05.454 INFO:teuthology.orchestra.run.vm04.stderr:dumped monmap epoch 1 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: Updating 
vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:05 vm03 ceph-mon[52586]: Deploying daemon mon.b on vm04 2026-03-09T13:59:05.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:05 vm04 ceph-mon[54203]: mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T13:59:06.348 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 systemd[1]: Starting Ceph mon.c for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T13:59:06.536 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-09T13:59:06.536 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mon dump -f json 2026-03-09T13:59:06.720 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 podman[58980]: 2026-03-09 13:59:06.345334367 +0000 UTC m=+0.015536524 container create d5ebe213e31b506ec02c9f8f9c5c1ca7b37880c4fdabd316fcef7d9e0707506a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-c, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 podman[58980]: 2026-03-09 13:59:06.38132023 +0000 UTC m=+0.051522377 container init d5ebe213e31b506ec02c9f8f9c5c1ca7b37880c4fdabd316fcef7d9e0707506a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-c, org.opencontainers.image.authors=Ceph Release Team , ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.41.3) 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 podman[58980]: 2026-03-09 13:59:06.384282685 +0000 UTC m=+0.054484842 container start d5ebe213e31b506ec02c9f8f9c5c1ca7b37880c4fdabd316fcef7d9e0707506a (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-c, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 bash[58980]: d5ebe213e31b506ec02c9f8f9c5c1ca7b37880c4fdabd316fcef7d9e0707506a 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 podman[58980]: 2026-03-09 13:59:06.338873032 +0000 UTC m=+0.009075199 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 systemd[1]: Started Ceph mon.c for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: set uid:gid to 167:167 (ceph:ceph) 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: pidfile_write: ignore empty --pid-file 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: load: jerasure load: lrc 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: RocksDB version: 7.9.2 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Git sha 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: DB SUMMARY 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: DB Session ID: JWKSME4QCLBI3PCTNBJJ 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: CURRENT file: CURRENT 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: IDENTITY file: IDENTITY 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T13:59:06.795 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ; 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.error_if_exists: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.create_if_missing: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.paranoid_checks: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.env: 0x561a26aafdc0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.fs: PosixFileSystem 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.info_log: 0x561a27991880 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_file_opening_threads: 16 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.statistics: (nil) 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.use_fsync: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_log_file_size: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.keep_log_file_num: 1000 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.recycle_log_file_num: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_fallocate: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_mmap_reads: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_mmap_writes: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.use_direct_reads: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.create_missing_column_families: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.db_log_dir: 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.wal_dir: 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 
ceph-mon[58994]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.advise_random_on_open: 1 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.db_write_buffer_size: 0 2026-03-09T13:59:06.795 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.write_buffer_manager: 0x561a27995900 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.rate_limiter: (nil) 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 
2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.wal_recovery_mode: 2 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enable_thread_tracking: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enable_pipelined_write: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.unordered_write: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.row_cache: None 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.wal_filter: None 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_ingest_behind: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.two_write_queues: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: 
Options.manual_wal_flush: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.wal_compression: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.atomic_flush: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.log_readahead_size: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.best_efforts_recovery: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.allow_data_in_errors: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.db_host_id: __hostname__ 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_background_jobs: 2 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_background_compactions: -1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_subcompactions: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_total_wal_size: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_open_files: -1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bytes_per_sync: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.wal_bytes_per_sync: 0 
2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_readahead_size: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_background_flushes: -1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Compression algorithms supported: 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kZSTD supported: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kXpressCompression supported: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kBZip2Compression supported: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kLZ4Compression supported: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kZlibCompression supported: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kLZ4HCCompression supported: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: kSnappyCompression supported: 1 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: DMutex implementation: pthread_mutex_t 
2026-03-09T13:59:06.796 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.merge_operator: 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_filter: None 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_filter_factory: None 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.sst_partitioner_factory: None 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561a279914e0) 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: cache_index_and_filter_blocks: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T13:59:06.797 
INFO:journalctl@ceph.mon.c.vm03.stdout: pin_top_level_index_and_filter: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: index_type: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: data_block_index_type: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: index_shortening: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: checksum: 4 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: no_block_cache: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache: 0x561a279b49b0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_name: BinnedLRUCache 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_options: 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: capacity : 536870912 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: num_shard_bits : 4 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: strict_capacity_limit : 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: high_pri_pool_ratio: 0.000 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_cache_compressed: (nil) 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: persistent_cache: (nil) 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_size: 4096 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_size_deviation: 10 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_restart_interval: 16 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: index_block_restart_interval: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: metadata_block_size: 4096 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: partition_filters: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: 
use_delta_encoding: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: filter_policy: bloomfilter 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: whole_key_filtering: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: verify_compression: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: read_amp_bytes_per_bit: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: format_version: 5 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: enable_index_compression: 1 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: block_align: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: max_auto_readahead_size: 262144 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: prepopulate_block_cache: 0 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: initial_auto_readahead_size: 8192 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout: num_file_reads_for_auto_readahead: 2 2026-03-09T13:59:06.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.write_buffer_size: 33554432 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_write_buffer_number: 2 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression: NoCompression 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression: Disabled 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.prefix_extractor: nullptr 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 
ceph-mon[58994]: rocksdb: Options.num_levels: 7 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.level: 32767 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.strategy: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.enabled: false 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.target_file_size_base: 67108864 
2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 
ceph-mon[58994]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.arena_block_size: 1048576 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.disable_auto_compactions: 0 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T13:59:06.798 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.inplace_update_support: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.bloom_locality: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.max_successive_merges: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: 
Options.optimize_filters_for_hits: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.paranoid_file_checks: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.force_consistency_checks: 1 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.report_bg_io_stats: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.ttl: 2592000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enable_blob_files: false 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.min_blob_size: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_file_size: 268435456 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T13:59:06.799 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.blob_file_starting_level: 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 60f6efd7-ac5f-45d4-95fe-877d15f6d6af 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064746419550, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064746420240, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773064746, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "60f6efd7-ac5f-45d4-95fe-877d15f6d6af", "db_session_id": "JWKSME4QCLBI3PCTNBJJ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773064746420299, "job": 1, "event": "recovery_finished"} 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T13:59:06.799 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561a279b6e00 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: DB pointer 0x561a27ac8000 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: ** DB Stats ** 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: ** Compaction Stats [default] ** 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:59:06.799 INFO:journalctl@ceph.mon.c.vm03.stdout: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: ** Compaction Stats [default] ** 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T13:59:06.800 
INFO:journalctl@ceph.mon.c.vm03.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Cumulative compaction: 0.00 GB write, 0.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Interval compaction: 0.00 GB write, 0.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Block cache BinnedLRUCache@0x561a279b49b0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-09T13:59:06.800 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: using public_addrv [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: starting mon.c rank -1 at public addrs [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] at bind addrs [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(???) e0 preinit fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).mds e1 new map 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).mds e1 print_map 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: e1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: btime 2026-03-09T13:58:34:849802+0000 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: legacy client fscid: -1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout: No filesystems configured 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 
0.375 full ratio 0.375 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mkfs f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: monmap epoch 1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 
2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: last_changed 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: min_mon_release 19 (squid) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: election_strategy: 1 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: fsmap 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: osdmap e1: 0 total, 0 up, 0 in 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e1: no daemons active 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/140323450' entity='client.admin' 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3439335771' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/980767895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Activating manager daemon y 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e2: y(active, starting, since 0.00390677s) 2026-03-09T13:59:06.800 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Manager daemon y is now available 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' 
entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14100 192.168.123.103:0/195757508' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e3: y(active, since 1.00852s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3970980266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4218896828' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e4: y(active, since 2s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4287327191' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4287327191' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e5: y(active, since 3s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3937136790' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Active manager daemon y restarted 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Activating manager daemon y 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: osdmap e2: 0 total, 0 up, 0 in 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e6: y(active, starting, since 0.00538484s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 
192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Manager daemon y is now available 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Found migration_current of "None". Setting to last migration. 
2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e7: y(active, since 1.00988s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:47] ENGINE Bus STARTING 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:47] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:47] ENGINE Client ('192.168.123.103', 33358) lost — peer dropped the TLS connection 
suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:47] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:47] ENGINE Bus STARTED 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Generating ssh key... 
2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Deploying cephadm binary to vm03 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e8: y(active, since 2s) 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Added host vm03 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.801 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 
ceph-mon[58994]: from='client.? 192.168.123.103:0/3151910805' entity='client.admin' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Saving service mon spec with placement count:5 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Saving service mgr spec with placement count:2 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/333503822' entity='client.admin' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3702495033' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14118 192.168.123.103:0/1748227341' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3702495033' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e9: y(active, since 6s) 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/244678179' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Active manager daemon y restarted 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Activating manager daemon y 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: osdmap e3: 0 total, 0 up, 0 in 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e10: y(active, starting, since 0.00666879s) 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Manager daemon y is now available 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:56] ENGINE Bus STARTING 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:56] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e11: y(active, since 1.01092s) 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:56] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:56] ENGINE Bus STARTED 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: [09/Mar/2026:13:58:56] ENGINE Client ('192.168.123.103', 42924) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/84120465' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2692635761' entity='client.admin' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e12: y(active, since 2s) 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1551614139' entity='client.admin' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:06.802 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating 
vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Deploying cephadm binary to vm04 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Added host vm04 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1820961916' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mgrmap e13: y(active, since 6s) 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1820961916' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm03:[v2:192.168.123.103:3301,v1:192.168.123.103:6790]=c;vm04:192.168.123.104=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Saving service mon spec with placement vm03:192.168.123.103=a;vm03:[v2:192.168.123.103:3301,v1:192.168.123.103:6790]=c;vm04:192.168.123.104=b;count:3 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/3030667033' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: Deploying daemon mon.b on vm04 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e4 crush map has features 
288514050185494528, adjusting msgr requires 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).mgr e0 loading version 13 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).mgr e13 active server: v2:192.168.123.103:6800/1991233681(14150) 2026-03-09T13:59:06.803 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:06 vm03 ceph-mon[58994]: mon.c@-1(synchronizing).mgr e13 mkfs or daemon transitioned to available, loading commands 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: Deploying daemon mon.c on vm03 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: mon.a calling monitor election 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: mon.b calling monitor election 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.929 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.930 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: monmap epoch 2 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: last_changed 2026-03-09T13:59:05.603859+0000 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: min_mon_release 19 (squid) 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: election_strategy: 1 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: fsmap 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: mgrmap e13: y(active, since 15s) 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: overall HEALTH_OK 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 
ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:10.930 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: Deploying daemon mon.c on vm03 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: mon.a calling monitor election 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:10.949 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: mon.b calling monitor election
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: monmap epoch 2
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: last_changed 2026-03-09T13:59:05.603859+0000
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: created 2026-03-09T13:58:33.785921+0000
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: min_mon_release 19 (squid)
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: election_strategy: 1
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: fsmap
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: mgrmap e13: y(active, since 15s)
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: overall HEALTH_OK
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:10.949 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:10.950 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:59:11.389 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:59:11.389 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":2,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","modified":"2026-03-09T13:59:05.603859Z","created":"2026-03-09T13:58:33.785921Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T13:59:11.389 INFO:teuthology.orchestra.run.vm04.stderr:dumped monmap epoch 2
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.conf
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.conf
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/4113719390' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.450 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-09T13:59:12.450 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mon dump -f json
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: Updating vm03:/etc/ceph/ceph.conf
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: Updating vm04:/etc/ceph/ceph.conf
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/4113719390' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.474 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.475 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.608 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:59:12 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:59:12.602+0000 7f6557736640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: Deploying daemon mon.c on vm03
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: mon.a calling monitor election
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: mon.b calling monitor election
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: monmap epoch 2
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: last_changed 2026-03-09T13:59:05.603859+0000
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: created 2026-03-09T13:58:33.785921+0000
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: min_mon_release 19 (squid)
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: election_strategy: 1
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: fsmap
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: mgrmap e13: y(active, since 15s)
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: overall HEALTH_OK
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.conf
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.conf
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/4113719390' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-09T13:59:12.794 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: mon.b calling monitor election
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: mon.a calling monitor election
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: monmap epoch 3
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: last_changed 2026-03-09T13:59:12.449654+0000
2026-03-09T13:59:17.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: created 2026-03-09T13:58:33.785921+0000
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: min_mon_release 19 (squid)
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: election_strategy: 1
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: fsmap
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: mgrmap e13: y(active, since 22s)
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: overall HEALTH_OK
2026-03-09T13:59:17.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: mon.b calling monitor election
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: mon.a calling monitor election
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: monmap epoch 3
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: last_changed 2026-03-09T13:59:12.449654+0000
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: created 2026-03-09T13:58:33.785921+0000
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: min_mon_release 19 (squid)
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: election_strategy: 1
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: fsmap
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: osdmap e4: 0 total, 0 up, 0 in
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: mgrmap e13: y(active, since 22s)
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: overall HEALTH_OK
2026-03-09T13:59:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y'
2026-03-09T13:59:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:18 vm03 ceph-mon[52586]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:59:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:18 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:18 vm04 ceph-mon[54203]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T13:59:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:18 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-09T13:59:18.865 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T13:59:18.865 INFO:teuthology.orchestra.run.vm04.stdout:{"epoch":3,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","modified":"2026-03-09T13:59:12.449654Z","created":"2026-03-09T13:58:33.785921Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3301","nonce":0},{"type":"v1","addr":"192.168.123.103:6790","nonce":0}]},"addr":"192.168.123.103:6790/0","public_addr":"192.168.123.103:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T13:59:18.865 INFO:teuthology.orchestra.run.vm04.stderr:dumped monmap epoch 3
2026-03-09T13:59:18.944 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-09T13:59:18.944 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph config generate-minimal-conf 2026-03-09T13:59:19.105 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:19.325 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.326 INFO:teuthology.orchestra.run.vm03.stdout:[global] 2026-03-09T13:59:19.326 INFO:teuthology.orchestra.run.vm03.stdout: fsid = f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.326 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] 2026-03-09T13:59:19.369 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
2026-03-09T13:59:19.370 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:59:19.370 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T13:59:19.395 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:59:19.395 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:19.460 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:59:19.460 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T13:59:19.492 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:59:19.492 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.b calling monitor election 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.a calling monitor election 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.519 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: monmap epoch 3 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: last_changed 2026-03-09T13:59:12.449654+0000 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: min_mon_release 19 (squid) 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: election_strategy: 1 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:19.520 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: fsmap 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mgrmap e13: y(active, since 22s) 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: overall HEALTH_OK 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:19.520 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:19.556 INFO:tasks.cephadm:Adding mgr.y on vm03 2026-03-09T13:59:19.556 INFO:tasks.cephadm:Adding mgr.x on vm04 2026-03-09T13:59:19.556 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply mgr '2;vm03=y;vm04=x' 2026-03-09T13:59:19.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mon.c calling monitor election 2026-03-09T13:59:19.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mon.c 
calling monitor election 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mon.b calling monitor election 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mon.a calling monitor election 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: monmap epoch 3 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: last_changed 2026-03-09T13:59:12.449654+0000 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: min_mon_release 19 (squid) 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: election_strategy: 1 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: fsmap 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:19.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: mgrmap e13: y(active, since 24s) 2026-03-09T13:59:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:19 vm04 ceph-mon[54203]: overall HEALTH_OK 2026-03-09T13:59:19.750 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mon.c calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mon.c calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mon.b calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mon.a calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: monmap epoch 3 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: last_changed 2026-03-09T13:59:12.449654+0000 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: min_mon_release 19 (squid) 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: election_strategy: 1 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: 0: 
[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: fsmap 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: mgrmap e13: y(active, since 24s) 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[52586]: overall HEALTH_OK 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.c calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.c calling monitor election 2026-03-09T13:59:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.b calling monitor election 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.a calling monitor election 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: monmap epoch 3 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: last_changed 2026-03-09T13:59:12.449654+0000 2026-03-09T13:59:19.793 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: created 2026-03-09T13:58:33.785921+0000 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: min_mon_release 19 (squid) 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: election_strategy: 1 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 1: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: 2: [v2:192.168.123.103:3301/0,v1:192.168.123.103:6790/0] mon.c 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: fsmap 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: osdmap e4: 0 total, 0 up, 0 in 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: mgrmap e13: y(active, since 24s) 2026-03-09T13:59:19.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:19 vm03 ceph-mon[58994]: overall HEALTH_OK 2026-03-09T13:59:19.983 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled mgr update... 2026-03-09T13:59:20.032 DEBUG:teuthology.orchestra.run.vm04:mgr.x> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.x.service 2026-03-09T13:59:20.034 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T13:59:20.034 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T13:59:20.034 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T13:59:20.055 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T13:59:20.055 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 
2026-03-09T13:59:20.116 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-09T13:59:20.117 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-09T13:59:20.117 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-09T13:59:20.117 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-09T13:59:20.117 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-09T13:59:20.117 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T13:59:20.117 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T13:59:20.117 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-09T13:59:20.178 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-09T13:59:20.178 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 13:58:58.588510830 +0000 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 13:41:22.326780316 +0000 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 13:41:22.326780316 +0000 2026-03-09T13:59:20.179 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-09 13:38:51.273000000 +0000 2026-03-09T13:59:20.179 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T13:59:20.248 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T13:59:20.249 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T13:59:20.249 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 
0.000135042 s, 3.8 MB/s 2026-03-09T13:59:20.250 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T13:59:20.307 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 13:58:58.622510871 +0000 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 13:41:22.261780251 +0000 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 13:41:22.261780251 +0000 2026-03-09T13:59:20.367 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-09 13:38:51.280000000 +0000 2026-03-09T13:59:20.367 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T13:59:20.430 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T13:59:20.430 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T13:59:20.430 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000202099 s, 2.5 MB/s 2026-03-09T13:59:20.432 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T13:59:20.489 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 13:58:58.670510930 +0000 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 13:41:22.279780269 +0000 2026-03-09T13:59:20.549 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 13:41:22.279780269 +0000 2026-03-09T13:59:20.550 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-09 13:38:51.284000000 +0000 2026-03-09T13:59:20.550 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T13:59:20.612 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T13:59:20.612 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T13:59:20.612 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000180439 s, 2.8 MB/s 2026-03-09T13:59:20.613 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T13:59:20.670 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 13:58:58.717510987 +0000 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 13:41:22.256780245 +0000 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 13:41:22.256780245 +0000 2026-03-09T13:59:20.726 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-09 13:38:51.302000000 +0000 2026-03-09T13:59:20.726 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T13:59:20.790 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T13:59:20.790 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T13:59:20.790 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000196528 s, 2.6 MB/s 2026-03-09T13:59:20.791 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T13:59:20.848 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T13:59:20.848 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T13:59:20.868 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T13:59:20.869 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d? 
2026-03-09T13:59:20.925 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda 2026-03-09T13:59:20.925 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb 2026-03-09T13:59:20.925 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc 2026-03-09T13:59:20.925 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd 2026-03-09T13:59:20.925 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde 2026-03-09T13:59:20.925 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T13:59:20.925 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T13:59:20.925 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm04=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: Saving service mgr spec with placement vm03=y;vm04=x;count:2 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.987 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth 
get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm04=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: Saving service mgr spec with placement vm03=y;vm04=x;count:2 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:20.988 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:20.988 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:20.988 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:59:03.364315451 +0000 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:41:22.921638330 +0000 2026-03-09T13:59:20.991 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:41:22.921638330 +0000 2026-03-09T13:59:20.992 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:38:20.231000000 +0000 2026-03-09T13:59:20.992 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb 
of=/dev/null count=1 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='client.14205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=y;vm04=x", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: Saving service mgr spec with placement vm03=y;vm04=x;count:2 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: Updating 
vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T13:59:21.061 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:21.062 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:20 vm04 ceph-mon[54203]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:21.066 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T13:59:21.066 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T13:59:21.066 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000117631 s, 4.4 MB/s 2026-03-09T13:59:21.067 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T13:59:21.126 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:59:03.409315499 +0000 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:41:22.884638261 +0000 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:41:22.884638261 +0000 2026-03-09T13:59:21.258 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:38:20.236000000 +0000 2026-03-09T13:59:21.258 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T13:59:21.317 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T13:59:21.317 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T13:59:21.317 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.00105349 s, 486 kB/s 2026-03-09T13:59:21.319 
DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T13:59:21.361 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:59:03.463315558 +0000 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:41:22.878638249 +0000 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:41:22.878638249 +0000 2026-03-09T13:59:21.422 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:38:20.245000000 +0000 2026-03-09T13:59:21.422 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T13:59:21.549 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T13:59:21.549 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T13:59:21.549 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000204293 s, 2.5 MB/s 2026-03-09T13:59:21.550 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T13:59:21.614 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:21.610+0000 7f476dd16140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T13:59:21.624 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 13:59:03.501315599 +0000 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 13:41:22.863638221 +0000 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 13:41:22.863638221 +0000 2026-03-09T13:59:21.663 INFO:teuthology.orchestra.run.vm04.stdout: Birth: 2026-03-09 13:38:20.251000000 +0000 2026-03-09T13:59:21.663 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T13:59:21.728 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T13:59:21.728 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T13:59:21.728 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000163536 s, 3.1 MB/s 2026-03-09T13:59:21.729 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T13:59:21.787 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde... 
2026-03-09T13:59:21.787 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vde 2026-03-09T13:59:21.792 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 13:59:21 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T13:59:21.444+0000 7f6557736640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T13:59:21.997 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: Deploying daemon mgr.x on vm04 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: Deploying daemon mgr.x on vm04 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:22.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.062 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.062 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.062 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:22.062 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:22.062 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T13:59:22.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: Deploying daemon mgr.x on vm04 2026-03-09T13:59:22.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:21.930+0000 7f476dd16140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: from numpy import show_config as show_numpy_config 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.020+0000 7f476dd16140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.059+0000 7f476dd16140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T13:59:22.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.131+0000 7f476dd16140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T13:59:22.786 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:59:22.804 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm03:/dev/vde 
2026-03-09T13:59:22.906 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.619+0000 7f476dd16140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T13:59:22.906 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.728+0000 7f476dd16140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:59:22.906 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.768+0000 7f476dd16140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T13:59:22.906 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.803+0000 7f476dd16140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T13:59:22.906 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.865+0000 7f476dd16140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: Reconfiguring mon.a (monmap changed)... 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: Reconfiguring daemon mon.a on vm03 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: Reconfiguring mgr.y (unknown last config time)... 
2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: Reconfiguring daemon mgr.y on vm03 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:23.067 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: Reconfiguring mon.a (monmap changed)... 
2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: Reconfiguring daemon mon.a on vm03 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: Reconfiguring daemon mgr.y on vm03 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:23.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:23.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:23.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:22.903+0000 7f476dd16140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T13:59:23.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.086+0000 
7f476dd16140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T13:59:23.241 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.145+0000 7f476dd16140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: Reconfiguring mon.a (monmap changed)... 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: Reconfiguring daemon mon.a on vm03 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: Reconfiguring mgr.y (unknown last config time)... 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: Reconfiguring daemon mgr.y on vm03 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 
2026-03-09T13:59:23.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:23.700 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.378+0000 7f476dd16140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T13:59:23.700 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.697+0000 7f476dd16140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T13:59:23.965 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.734+0000 7f476dd16140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T13:59:23.965 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.775+0000 7f476dd16140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T13:59:23.965 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.851+0000 7f476dd16140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T13:59:23.965 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.886+0000 7f476dd16140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T13:59:23.965 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:23.962+0000 7f476dd16140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T13:59:24.185 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 
ceph-mon[52586]: Reconfiguring mon.c (monmap changed)... 2026-03-09T13:59:24.185 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: Reconfiguring daemon mon.c on vm03 2026-03-09T13:59:24.185 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: Reconfiguring mon.c (monmap changed)... 
2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: Reconfiguring daemon mon.c on vm03 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:24.186 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:24.077+0000 7f476dd16140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: Reconfiguring mon.c (monmap changed)... 
2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: Reconfiguring daemon mon.c on vm03 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:24.220 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:24.490 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:24.217+0000 7f476dd16140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T13:59:24.490 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 13:59:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T13:59:24.254+0000 7f476dd16140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: Reconfiguring mon.b (monmap changed)... 
2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: Reconfiguring daemon mon.b on vm04 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3923352005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]': finished 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='mgr.? 
192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: Standby manager daemon x started 2026-03-09T13:59:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:25 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1051509365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: Reconfiguring mon.b (monmap changed)... 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: Reconfiguring daemon mon.b on vm04 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3923352005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]': finished 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T13:59:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: Standby manager daemon x started 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1051509365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: Reconfiguring mon.b (monmap changed)... 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: Reconfiguring daemon mon.b on vm04 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3923352005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5c050d28-3a63-4c87-aafc-d7703eb5e579"}]': finished 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: osdmap e5: 1 total, 0 up, 1 in 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='mgr.? 
192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/2363558158' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: Standby manager daemon x started 2026-03-09T13:59:25.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:25 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1051509365' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:26.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:26 vm04 ceph-mon[54203]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T13:59:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T13:59:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:26 vm03 ceph-mon[52586]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T13:59:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T13:59:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:26 vm03 ceph-mon[58994]: mgrmap e14: y(active, since 29s), standbys: x 2026-03-09T13:59:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T13:59:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:27 vm03 ceph-mon[52586]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:27 vm03 ceph-mon[58994]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:27.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:27 vm04 ceph-mon[54203]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:28.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:28 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T13:59:28.156 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:28 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T13:59:28.156 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:28 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:28.429 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:28 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:28.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:28 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T13:59:28.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:28 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:29.159 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:29 vm03 ceph-mon[52586]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:29.159 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:29 vm03 ceph-mon[52586]: Deploying daemon osd.0 on vm03 2026-03-09T13:59:29.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:29 vm04 ceph-mon[54203]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:29.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:29 vm04 ceph-mon[54203]: Deploying daemon osd.0 on vm03 2026-03-09T13:59:29.493 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:29 vm03 ceph-mon[58994]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:29.493 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:29 vm03 ceph-mon[58994]: Deploying daemon osd.0 on vm03 2026-03-09T13:59:30.167 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:30.167 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:30.167 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:30.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:30.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:30.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:30 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:30.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:30 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:30.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:30 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:30.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:30 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.150 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-09T13:59:31.207 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.0.service 2026-03-09T13:59:31.208 INFO:tasks.cephadm:Deploying osd.1 on vm03 with /dev/vdd... 
2026-03-09T13:59:31.209 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdd 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 
vm03 ceph-mon[58994]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.433 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:31 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.490 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:31.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:31 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.032 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:31 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T13:59:31.689+0000 7f97eace4740 -1 osd.0 0 log_to_monitors true 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: 
dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:32.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T13:59:32.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 
ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:32.864 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:59:32.878 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm03:/dev/vdd 2026-03-09T13:59:33.048 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: Detected new or changed devices on vm03 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T13:59:33.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:33 vm04 ceph-mon[54203]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: Detected new or changed devices on vm03 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:33.532 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[52586]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: Detected new or changed devices on vm03 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: from='osd.0 v2:192.168.123.103:6801/2121486584' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: osdmap e6: 1 total, 0 up, 1 in 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:33.535 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:33 vm03 ceph-mon[58994]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:34.201 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T13:59:34.138+0000 7f97e7478640 -1 osd.0 0 waiting for initial osdmap 2026-03-09T13:59:34.201 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T13:59:34.144+0000 7f97e228e640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T13:59:34.201 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': 
finished 2026-03-09T13:59:34.201 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T13:59:34.201 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='client.24130 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='osd.0 ' entity='osd.0' 2026-03-09T13:59:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='client.24130 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
09 13:59:34 vm03 ceph-mon[52586]: from='osd.0 ' entity='osd.0' 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: osdmap e7: 1 total, 0 up, 1 in 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='client.24130 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:34.511 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='osd.0 ' entity='osd.0' 2026-03-09T13:59:34.511 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3432107934' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]': finished 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:35 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3545646141' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3432107934' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]': finished 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3545646141' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3432107934' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]: dispatch 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b0d835e0-d8bd-405c-99e9-38882318aaa8"}]': finished 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: osdmap e8: 2 total, 1 up, 2 in 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:35.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:35 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3545646141' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[52586]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[52586]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[58994]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[58994]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T13:59:36.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:36 vm04 ceph-mon[54203]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:36 vm04 ceph-mon[54203]: osdmap e9: 2 total, 1 up, 2 in 2026-03-09T13:59:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:38.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:38 vm03 ceph-mon[52586]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:38.585 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:38 vm03 ceph-mon[58994]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:38 vm04 ceph-mon[54203]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:39.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T13:59:39.544 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:39.544 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[52586]: Deploying daemon osd.1 on vm03 2026-03-09T13:59:39.544 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T13:59:39.544 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:39.544 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:39 vm03 ceph-mon[58994]: Deploying daemon osd.1 on vm03 2026-03-09T13:59:39.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:39 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T13:59:39.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:39 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:39.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:39 vm04 ceph-mon[54203]: Deploying daemon osd.1 on vm03 2026-03-09T13:59:40.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:40 vm04 ceph-mon[54203]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:40.544 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:40 vm03 ceph-mon[58994]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:40.544 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:40 vm03 ceph-mon[52586]: pgmap v21: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:41.581 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:41 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.662 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 1 on host 'vm03' 2026-03-09T13:59:41.719 DEBUG:teuthology.orchestra.run.vm03:osd.1> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.1.service 2026-03-09T13:59:41.720 INFO:tasks.cephadm:Deploying osd.2 on vm03 with /dev/vdc... 2026-03-09T13:59:41.720 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdc 2026-03-09T13:59:41.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:41.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:41.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:41 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:41.997 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[52586]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[52586]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[58994]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.797 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:42 vm03 ceph-mon[58994]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T13:59:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:42 vm04 ceph-mon[54203]: pgmap v22: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:42 vm04 ceph-mon[54203]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T13:59:43.360 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:59:43.376 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm03:/dev/vdc 2026-03-09T13:59:43.548 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:43.893 
INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T13:59:43.664+0000 7f6fc6661640 -1 osd.1 0 waiting for initial osdmap 2026-03-09T13:59:43.893 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T13:59:43.674+0000 7f6fc1c78640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: Detected new or changed devices on vm03 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd 
crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:43.894 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: Detected new or changed devices on vm03 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:43.895 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: Detected new or changed devices on vm03 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: osdmap e10: 2 total, 1 up, 2 in 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:43.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: osdmap e11: 2 total, 1 up, 2 in 
2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4241828370' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]': finished 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:44.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4241828370' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]': finished 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:45.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: pgmap v24: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='osd.1 v2:192.168.123.103:6805/4232373287' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: osdmap e11: 2 total, 1 up, 2 in 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4241828370' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9f582930-68f3-4f39-9077-1b35b670203b"}]': finished 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: osdmap e12: 3 total, 2 up, 3 in 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T13:59:45.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:45.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:45 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3329694525' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:45.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:45 vm04 ceph-mon[54203]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T13:59:45.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:45 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:46.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3329694525' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:46.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[58994]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T13:59:46.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:46.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3329694525' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:46.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[52586]: osdmap e13: 3 total, 2 up, 3 in 2026-03-09T13:59:46.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:45 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:46.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:46 vm04 ceph-mon[54203]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:47.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:46 vm03 ceph-mon[52586]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:47.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:46 vm03 ceph-mon[58994]: pgmap v27: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[52586]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[52586]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[58994]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T13:59:48.910 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:48.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:48 vm04 ceph-mon[54203]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:48.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T13:59:48.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:49.963 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:49 vm03 ceph-mon[52586]: Deploying daemon osd.2 on vm03 2026-03-09T13:59:49.964 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:49 vm03 ceph-mon[58994]: Deploying daemon osd.2 on vm03 2026-03-09T13:59:49.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:49 vm04 ceph-mon[54203]: Deploying daemon osd.2 on vm03 2026-03-09T13:59:50.687 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[52586]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:50.687 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[52586]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:50.687 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:50.687 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:50.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[58994]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:50.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:50.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:50.688 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:50.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:50 vm04 ceph-mon[54203]: pgmap v30: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:50.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:50.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:50.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:51.706 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 2 on host 'vm03' 2026-03-09T13:59:51.778 
DEBUG:teuthology.orchestra.run.vm03:osd.2> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.2.service 2026-03-09T13:59:51.779 INFO:tasks.cephadm:Deploying osd.3 on vm03 with /dev/vdb... 2026-03-09T13:59:51.780 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdb 2026-03-09T13:59:52.064 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:52.577 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.577 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.578 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:52 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:52 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:52.956 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:52 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T13:59:52.707+0000 7f83f0fa1740 -1 osd.2 0 log_to_monitors true 2026-03-09T13:59:53.339 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T13:59:53.359 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm03:/dev/vdb 2026-03-09T13:59:53.530 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: Detected new or changed devices on vm03 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T13:59:53.970 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[58994]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T13:59:53.970 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: Detected new or changed devices on vm03 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T13:59:53.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:53 vm03 ceph-mon[52586]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 
2026-03-09T13:59:53.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: Detected new or changed devices on vm03 2026-03-09T13:59:53.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T13:59:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:53 vm04 ceph-mon[54203]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:54.991 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]': finished 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: osdmap e15: 4 total, 2 up, 4 in 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T13:59:55.020 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]': finished 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: osdmap e15: 4 total, 2 up, 4 in 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: pgmap v32: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: osdmap e14: 3 total, 2 up, 3 in 2026-03-09T13:59:55.020 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='osd.2 v2:192.168.123.103:6809/872739083' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1602230410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d0a3a4c-94c7-4259-a288-b3e930c3faf3"}]': finished 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: osdmap e15: 4 total, 2 up, 4 in 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:55.020 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3999993700' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[52586]: from='osd.2 ' entity='osd.2' 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.792 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:55 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T13:59:55.504+0000 7f83ed735640 -1 osd.2 0 waiting for initial osdmap 2026-03-09T13:59:55.792 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:55 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T13:59:55.513+0000 7f83e854b640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3999993700' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[58994]: from='osd.2 ' entity='osd.2' 2026-03-09T13:59:55.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:55 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:55 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/3999993700' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T13:59:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:55 vm04 ceph-mon[54203]: from='osd.2 ' entity='osd.2' 2026-03-09T13:59:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:55 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:56.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: osdmap e16: 4 total, 3 up, 4 in 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:56 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: osdmap e16: 4 total, 3 up, 4 in 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: pgmap v35: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: osdmap e16: 4 total, 3 up, 4 in 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T13:59:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:56 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.963 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:57.963 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:57.963 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[58994]: osdmap e17: 4 total, 3 up, 4 in 2026-03-09T13:59:57.963 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.963 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:57.965 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:57.965 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:57.965 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[52586]: osdmap e17: 4 total, 3 up, 4 in 2026-03-09T13:59:57.965 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[52586]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.965 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:57 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:57.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:57 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:57.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:57 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:57.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:57 vm04 ceph-mon[54203]: osdmap e17: 4 total, 3 up, 4 in 2026-03-09T13:59:57.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:57 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:57 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[52586]: pgmap v37: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[52586]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[58994]: pgmap v37: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[58994]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T13:59:58.897 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:58 vm04 ceph-mon[54203]: pgmap v37: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T13:59:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T13:59:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:58 vm04 ceph-mon[54203]: osdmap e18: 4 total, 3 up, 4 in 2026-03-09T13:59:58.991 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T13:59:59.447 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77681]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdd 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77681]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77681]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77670]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vde 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77670]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77670]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77687]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vdc 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77687]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77687]: 
pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.448 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77687]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 sudo[56585]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 sudo[56585]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 sudo[56585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 sudo[56585]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: Deploying daemon osd.3 on vm03 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": 
"mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 13:59:59 vm04 ceph-mon[54203]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77691]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77691]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77691]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: Deploying daemon osd.3 on vm03 2026-03-09T13:59:59.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 
ceph-mon[52586]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 
ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[52586]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77695]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77695]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 sudo[77695]: pam_unix(sudo:session): session closed for user root 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: Deploying daemon osd.3 on vm03 
2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 
2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T13:59:59.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 13:59:59 vm03 ceph-mon[58994]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T14:00:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:00 vm04 ceph-mon[54203]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:00 vm04 ceph-mon[54203]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T14:00:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:00 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:00 vm04 ceph-mon[54203]: overall HEALTH_OK 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[52586]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[52586]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T14:00:01.042 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[52586]: overall HEALTH_OK 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[58994]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[58994]: osdmap e19: 4 total, 3 up, 4 in 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:00 vm03 ceph-mon[58994]: overall HEALTH_OK 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.218 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:02 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: mgrmap e15: y(active, since 66s), standbys: x 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 
ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:02 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.323 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 3 on host 'vm03' 2026-03-09T14:00:03.388 DEBUG:teuthology.orchestra.run.vm03:osd.3> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.3.service 2026-03-09T14:00:03.390 INFO:tasks.cephadm:Deploying osd.4 on vm04 with /dev/vde... 2026-03-09T14:00:03.390 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vde 2026-03-09T14:00:03.581 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.983 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='osd.3 v2:192.168.123.103:6813/2851532553' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:03.983 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='osd.3 v2:192.168.123.103:6813/2851532553' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd 
crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.168 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='osd.3 v2:192.168.123.103:6813/2851532553' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.169 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:04.352 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:00:04.376 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm04:/dev/vde 2026-03-09T14:00:04.554 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='osd.3 v2:192.168.123.103:6813/2851532553' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": 
["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:05.159 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:04 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:00:04 vm03 ceph-mon[52586]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='osd.3 v2:192.168.123.103:6813/2851532553' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: osdmap e20: 4 total, 3 up, 4 in 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='osd.3 v2:192.168.123.103:6813/2851532553' 
entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:05.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:04 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: Detected new or changed devices on vm03 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/1326863548' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]': finished 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: Detected new or changed devices on vm03 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 
ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/1326863548' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]': finished 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: Detected new or changed devices on vm03 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, 
"weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: osdmap e21: 4 total, 3 up, 4 in 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/1326863548' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1227a88f-b360-42a6-a96c-5fc1f52a1fbc"}]': finished 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: osdmap e22: 5 total, 3 up, 5 in 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:06.293 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:00:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3[78109]: 2026-03-09T14:00:06.102+0000 7faa0a1f9640 -1 osd.3 0 waiting for initial osdmap 2026-03-09T14:00:06.293 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:00:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3[78109]: 2026-03-09T14:00:06.111+0000 7faa05822640 -1 osd.3 22 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: from='client.? 
192.168.123.104:0/1052334398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: from='osd.3 ' entity='osd.3' 2026-03-09T14:00:07.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:06 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/1052334398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: from='osd.3 ' entity='osd.3' 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/1052334398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: from='osd.3 ' entity='osd.3' 2026-03-09T14:00:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:08.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:08 vm04 ceph-mon[54203]: osd.3 v2:192.168.123.103:6813/2851532553 boot 2026-03-09T14:00:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:08 vm04 ceph-mon[54203]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T14:00:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:08 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:08 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[52586]: osd.3 v2:192.168.123.103:6813/2851532553 boot 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[52586]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[58994]: osd.3 v2:192.168.123.103:6813/2851532553 
boot 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[58994]: osdmap e23: 5 total, 4 up, 5 in 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:00:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:08 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:09.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:09 vm04 ceph-mon[54203]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:09.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:09 vm04 ceph-mon[54203]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T14:00:09.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:09 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[52586]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[52586]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T14:00:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:09.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[58994]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:09.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[58994]: osdmap e24: 5 total, 4 up, 5 in 2026-03-09T14:00:09.542 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:09 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:10 vm04 ceph-mon[54203]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:00:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:10 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:10 vm04 ceph-mon[54203]: Deploying daemon osd.4 on vm04 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[52586]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[52586]: Deploying daemon osd.4 on vm04 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[58994]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[58994]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:10 vm03 ceph-mon[58994]: Deploying daemon osd.4 on vm04 2026-03-09T14:00:12.434 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:12 vm04 ceph-mon[54203]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:12.434 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:12.434 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:12.434 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:12 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[58994]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:00:12 vm03 ceph-mon[52586]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:12 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:13.023 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 4 on host 'vm04' 2026-03-09T14:00:13.093 DEBUG:teuthology.orchestra.run.vm04:osd.4> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.4.service 2026-03-09T14:00:13.095 INFO:tasks.cephadm:Deploying osd.5 on vm04 with /dev/vdd... 2026-03-09T14:00:13.095 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdd 2026-03-09T14:00:13.395 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:13.972 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:13.973 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:13.973 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:13 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:13 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:14.228 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:00:14 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 
2026-03-09T14:00:14.094+0000 7ffb331ac740 -1 osd.4 0 log_to_monitors true 2026-03-09T14:00:14.845 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:00:14.864 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm04:/dev/vdd 2026-03-09T14:00:15.049 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: Detected new or changed devices on vm04 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='osd.4 v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:15.292 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: Adjusting osd_memory_target on vm04 to 257.0M 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:15.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: Detected new or changed devices on vm04 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='osd.4 v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: Adjusting osd_memory_target on vm04 to 257.0M 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:15.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:15 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: Detected new or changed devices on vm04 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='osd.4 v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='osd.4 ' 
entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: Adjusting osd_memory_target on vm04 to 257.0M 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: Unable to set osd_memory_target on vm04 to 269530726: error parsing value: Value '269530726' is below minimum 939524096 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:15.489 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:15 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='osd.4 
v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.168 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:16.169 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:16.169 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:16 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='osd.4 v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='osd.4 v2:192.168.123.104:6800/288742704' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: osdmap e25: 5 total, 4 up, 5 in 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:16 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:17.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:00:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:00:16.953+0000 7ffb2f12d640 -1 osd.4 0 waiting for initial osdmap 2026-03-09T14:00:17.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:00:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:00:16.966+0000 7ffb2a756640 -1 osd.4 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:00:17.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:17.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='client.24208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:17.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/94897701' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]': finished 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: osdmap e27: 6 total, 4 up, 6 in 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/525211031' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:17.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:17 vm04 ceph-mon[54203]: from='osd.4 ' entity='osd.4' 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='client.24208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/94897701' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]': finished 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: osdmap e27: 6 total, 4 up, 6 in 2026-03-09T14:00:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='client.? 
192.168.123.104:0/525211031' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[52586]: from='osd.4 ' entity='osd.4' 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='client.24208 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: osdmap e26: 5 total, 4 up, 5 in 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/94897701' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "816af1b8-9560-416b-b784-dce6f7c9ca65"}]': finished 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: osdmap e27: 6 total, 4 up, 6 in 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/525211031' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:17.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:17 vm03 ceph-mon[58994]: from='osd.4 ' entity='osd.4' 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: osd.4 v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:18 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: osd.4 
v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: osd.4 v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: osdmap e28: 6 total, 5 up, 6 in 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:00:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:18 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:19.337 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:19 vm04 ceph-mon[54203]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:19.337 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:19 vm04 ceph-mon[54203]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T14:00:19.337 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:19 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[52586]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[52586]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[58994]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[58994]: osdmap e29: 6 total, 5 up, 6 in 2026-03-09T14:00:19.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:19 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:20.140 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:20 vm04 ceph-mon[54203]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T14:00:20.140 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:20.140 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:00:20.140 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:20 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[52586]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[58994]: osdmap e30: 6 total, 5 up, 6 in 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-09T14:00:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:20 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:21.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:21 vm04 ceph-mon[54203]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:21.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:21 vm04 ceph-mon[54203]: Deploying daemon osd.5 on vm04 2026-03-09T14:00:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:21 vm03 ceph-mon[52586]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:21 vm03 ceph-mon[52586]: Deploying daemon osd.5 on vm04 2026-03-09T14:00:21.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:21 vm03 ceph-mon[58994]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 133 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:21.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:21 vm03 ceph-mon[58994]: Deploying daemon osd.5 on vm04 2026-03-09T14:00:22.360 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:22.361 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.361 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:22 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:22 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:22.835 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 5 on host 'vm04' 2026-03-09T14:00:22.893 DEBUG:teuthology.orchestra.run.vm04:osd.5> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service 2026-03-09T14:00:22.896 INFO:tasks.cephadm:Deploying osd.6 on vm04 with /dev/vdc... 2026-03-09T14:00:22.896 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdc 2026-03-09T14:00:23.172 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 87 KiB/s, 0 objects/s recovering 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 
ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.442 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:23 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 87 KiB/s, 0 objects/s recovering 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 87 KiB/s, 0 objects/s recovering 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 
ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:23 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:23.707 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:00:23 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:00:23.438+0000 7f39b5cee740 -1 osd.5 0 log_to_monitors true 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:24.449 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:24.449 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:24 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T14:00:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:24 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:24.546 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:00:24.561 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm04:/dev/vdc 2026-03-09T14:00:24.716 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 73 KiB/s, 0 objects/s recovering 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: Detected new or changed devices on vm04 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: Adjusting osd_memory_target on vm04 to 128.5M 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: Unable to set osd_memory_target on vm04 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 
ceph-mon[54203]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:25 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:25.491 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:00:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:00:25.175+0000 7f39b1c6f640 -1 osd.5 0 waiting for initial osdmap 2026-03-09T14:00:25.491 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:00:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:00:25.186+0000 7f39ada99640 -1 osd.5 32 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 73 KiB/s, 0 objects/s recovering 2026-03-09T14:00:25.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: Detected new or changed devices on vm04 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: Adjusting osd_memory_target on vm04 to 128.5M 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: Unable to set osd_memory_target on vm04 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 73 KiB/s, 0 objects/s recovering 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: Detected new or changed devices on vm04 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: Adjusting osd_memory_target on vm04 to 128.5M 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: Unable to set osd_memory_target on vm04 to 134765363: error parsing value: Value '134765363' is below minimum 939524096 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='osd.5 v2:192.168.123.104:6804/2731397521' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: osdmap e31: 6 total, 5 up, 6 in 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='osd.5 ' 
entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:25.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:25 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='client.24235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: 
from='client.? 192.168.123.104:0/3840284906' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]': finished 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:26 vm04 ceph-mon[54203]: from='client.? 
192.168.123.104:0/649298688' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='client.24235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/3840284906' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]': finished 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[52586]: from='client.? 
192.168.123.104:0/649298688' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='client.24235 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: osdmap e32: 6 total, 5 up, 6 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/3840284906' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "25f29ca8-e401-49b8-826a-20452613ff7c"}]': finished 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: osdmap e33: 7 total, 6 up, 7 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: osdmap e34: 7 total, 6 up, 7 in 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:26.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:26 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/649298688' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:27.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:27 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T14:00:27.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:27 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T14:00:27.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:27 vm04 ceph-mon[54203]: pgmap v67: 1 pgs: 1 unknown; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:27.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:27 vm04 ceph-mon[54203]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T14:00:27.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:27 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[58994]: pgmap v67: 1 pgs: 1 unknown; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[58994]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:27 vm03 
ceph-mon[52586]: pgmap v67: 1 pgs: 1 unknown; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[52586]: osdmap e35: 7 total, 6 up, 7 in 2026-03-09T14:00:27.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:27 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:29.434 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:29 vm04 ceph-mon[54203]: pgmap v71: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:29.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:29 vm03 ceph-mon[52586]: pgmap v71: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:29.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:29 vm03 ceph-mon[58994]: pgmap v71: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:30.366 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:30 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:00:30.366 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:30 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:30 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:00:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:30 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:30 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-09T14:00:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:30 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:31.355 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:31 vm04 ceph-mon[54203]: pgmap v72: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:31.355 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:31 vm04 ceph-mon[54203]: Deploying daemon osd.6 on vm04 2026-03-09T14:00:31.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:31 vm03 ceph-mon[52586]: pgmap v72: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:31.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:31 vm03 ceph-mon[52586]: Deploying daemon osd.6 on vm04 2026-03-09T14:00:31.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:31 vm03 ceph-mon[58994]: pgmap v72: 1 pgs: 1 peering; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail 2026-03-09T14:00:31.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:31 vm03 ceph-mon[58994]: Deploying daemon osd.6 on vm04 2026-03-09T14:00:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:32 vm04 ceph-mon[54203]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T14:00:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:32 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:32 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[52586]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[58994]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 75 KiB/s, 0 objects/s recovering 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:32 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:32.641 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 6 on host 'vm04' 2026-03-09T14:00:32.695 DEBUG:teuthology.orchestra.run.vm04:osd.6> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.6.service 2026-03-09T14:00:32.696 INFO:tasks.cephadm:Deploying osd.7 on vm04 with /dev/vdb... 
2026-03-09T14:00:32.696 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- lvm zap /dev/vdb 2026-03-09T14:00:32.938 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:33.524 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.525 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:33 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.776 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:00:33 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:00:33.520+0000 7f8ad9213740 -1 osd.6 0 log_to_monitors true 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:33.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:33 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.368 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:00:34.384 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch daemon add osd vm04:/dev/vdb 2026-03-09T14:00:34.547 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 59 KiB/s, 0 objects/s recovering 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": 
"osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: Detected new or changed devices on vm04 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: Adjusting osd_memory_target on vm04 to 87737k 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: Unable to set osd_memory_target on vm04 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:34.569 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:34 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 59 KiB/s, 0 objects/s recovering 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: Detected new or changed devices on vm04 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: 
from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: Adjusting osd_memory_target on vm04 to 87737k 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: Unable to set osd_memory_target on vm04 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 59 KiB/s, 0 objects/s recovering 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: Detected new or changed devices on vm04 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: Adjusting osd_memory_target on vm04 to 87737k 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: Unable to set osd_memory_target on vm04 to 89843575: error parsing value: Value '89843575' is below minimum 939524096 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:34.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:34 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:35.581 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:00:35 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:00:35.365+0000 7f8ad59a7640 -1 osd.6 0 waiting for initial osdmap 2026-03-09T14:00:35.581 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:00:35 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:00:35.371+0000 7f8ad0fbe640 -1 osd.6 37 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='client.24253 -' entity='client.admin' 
cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:35.582 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:35 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 
ceph-mon[52586]: from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: osdmap e36: 7 total, 6 up, 7 in 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='osd.6 v2:192.168.123.104:6808/1293006004' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:35.792 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='client.24253 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T14:00:35.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:35 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]': finished 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: osd.6 v2:192.168.123.104:6808/1293006004 boot 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: osdmap e38: 8 total, 7 up, 8 in 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='client.? 
192.168.123.104:0/2728996990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T14:00:36.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:36 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: osdmap e37: 7 total, 6 up, 7 in 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='client.? 
192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]': finished 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: osd.6 v2:192.168.123.104:6808/1293006004 boot 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: osdmap e38: 8 total, 7 up, 8 in 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/2728996990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail; 54 KiB/s, 0 objects/s recovering 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: osdmap 
e37: 7 total, 6 up, 7 in 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]: dispatch 2026-03-09T14:00:36.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/4070307461' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1c6b6a7e-1424-4a80-ab76-7670e4f673d4"}]': finished 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: osd.6 v2:192.168.123.104:6808/1293006004 boot 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: osdmap e38: 8 total, 7 up, 8 in 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/2728996990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: osdmap e39: 8 total, 7 up, 8 in 2026-03-09T14:00:36.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:36 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:37.740 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:37 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T14:00:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:37 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T14:00:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:37 vm04 ceph-mon[54203]: osdmap e40: 8 total, 7 up, 8 in 2026-03-09T14:00:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:37 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[52586]: osdmap e40: 8 total, 7 up, 8 in 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[58994]: osdmap 
e40: 8 total, 7 up, 8 in 2026-03-09T14:00:37.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:37 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:39.205 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:39 vm04 ceph-mon[54203]: pgmap v81: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:39.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:39 vm03 ceph-mon[52586]: pgmap v81: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:39.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:39 vm03 ceph-mon[58994]: pgmap v81: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:40.032 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:40 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:00:40.032 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:40 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:40 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:00:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:40 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:40 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-09T14:00:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:40 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:41 vm03 ceph-mon[52586]: pgmap v82: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:41 vm03 ceph-mon[52586]: Deploying daemon osd.7 on vm04 2026-03-09T14:00:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:41 vm03 ceph-mon[58994]: pgmap v82: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:41 vm03 ceph-mon[58994]: Deploying daemon osd.7 on vm04 2026-03-09T14:00:41.406 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:41 vm04 ceph-mon[54203]: pgmap v82: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:41.406 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:41 vm04 ceph-mon[54203]: Deploying daemon osd.7 on vm04 2026-03-09T14:00:42.150 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:42.150 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.150 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.151 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:42.151 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.151 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:42 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:42.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:42 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:42.718 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 7 on host 'vm04' 2026-03-09T14:00:42.770 DEBUG:teuthology.orchestra.run.vm04:osd.7> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.7.service 2026-03-09T14:00:42.772 INFO:tasks.cephadm:Waiting for 8 OSDs to come up... 
2026-03-09T14:00:42.772 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd stat -f json 2026-03-09T14:00:42.956 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: pgmap v83: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:43.057 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
14:00:43 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: pgmap v83: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:43 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.215 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: pgmap v83: 1 pgs: 1 peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 
2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:43 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:43.303 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":40,"num_osds":8,"num_up_osds":7,"osd_up_since":1773064835,"num_in_osds":8,"osd_in_since":1773064835,"num_remapped_pgs":0} 2026-03-09T14:00:43.532 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:00:43 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:00:43.235+0000 7fc9e7f52740 -1 osd.7 0 log_to_monitors true 2026-03-09T14:00:44.304 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd stat -f json 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/986724652' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/986724652' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' 
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.329 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:44.330 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:44.330 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:44 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.478 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/986724652' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:44 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:44.714 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:44.765 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":41,"num_osds":8,"num_up_osds":7,"osd_up_since":1773064835,"num_in_osds":8,"osd_in_since":1773064835,"num_remapped_pgs":0} 2026-03-09T14:00:45.491 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:00:45 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:00:45.065+0000 7fc9e3ed3640 -1 osd.7 0 waiting for initial osdmap 2026-03-09T14:00:45.491 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:00:45 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:00:45.075+0000 7fc9dfcfd640 -1 osd.7 42 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: Detected new or changed devices on vm04 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: Adjusting osd_memory_target on vm04 to 65803k 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: Unable to set osd_memory_target on vm04 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T14:00:45.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:45 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/3464474607' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:45.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:45.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: Detected new or changed devices on vm04 2026-03-09T14:00:45.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: Adjusting osd_memory_target on vm04 to 65803k 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: Unable to set osd_memory_target on vm04 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[52586]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:45 vm03 
ceph-mon[52586]: from='client.? 192.168.123.103:0/3464474607' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: Detected new or changed devices on vm04 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: Adjusting osd_memory_target on vm04 to 65803k 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: Unable to set osd_memory_target on vm04 to 67382681: error parsing value: Value '67382681' is below minimum 939524096 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: osdmap e41: 8 total, 7 up, 8 in 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: from='osd.7 v2:192.168.123.104:6812/3000381118' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T14:00:45.543 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:45 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3464474607' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:45.766 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd stat -f json 2026-03-09T14:00:45.934 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:46.171 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:46.227 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":43,"num_osds":8,"num_up_osds":8,"osd_up_since":1773064846,"num_in_osds":8,"osd_in_since":1773064835,"num_remapped_pgs":1} 2026-03-09T14:00:46.227 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd dump --format=json 2026-03-09T14:00:46.395 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:46.420 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:46.420 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T14:00:46.420 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.420 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T14:00:46.421 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:46 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: 
from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: osdmap e42: 8 total, 7 up, 8 in 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: osdmap e43: 8 total, 8 up, 8 in 2026-03-09T14:00:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:46 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:00:46.624 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:46.624 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":43,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","created":"2026-03-09T13:58:34.850179+0000","modified":"2026-03-09T14:00:46.011734+0000","last_up_change":"2026-03-09T14:00:46.011734+0000","last_in_change":"2026-03-09T14:00:35.473318+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:59:57.267231+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5c050d28-3a63-4c87-aafc-d7703eb5e579","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":2121486584}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2121486584}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2121486584}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6803","nonce":2121486584}]},"public_addr":"192.168.123.103:6801/2121486584","cluster_addr":"192.168.123.103:6802/2121486584","heartbeat_back_addr":"192.168.123.103:6804/2121486584","heartbeat_front_addr":"192.168.123.103:6803/2121486584","state":["exists","up"]},{"osd":1,"uuid":"b0d835e0-d8bd-405c-99e9-38882318aaa8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":29,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":4232373287}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":4232373287}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":4232373287}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.1
03:6807","nonce":4232373287}]},"public_addr":"192.168.123.103:6805/4232373287","cluster_addr":"192.168.123.103:6806/4232373287","heartbeat_back_addr":"192.168.123.103:6808/4232373287","heartbeat_front_addr":"192.168.123.103:6807/4232373287","state":["exists","up"]},{"osd":2,"uuid":"9f582930-68f3-4f39-9077-1b35b670203b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":872739083}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":872739083}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":872739083}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":872739083}]},"public_addr":"192.168.123.103:6809/872739083","cluster_addr":"192.168.123.103:6810/872739083","heartbeat_back_addr":"192.168.123.103:6812/872739083","heartbeat_front_addr":"192.168.123.103:6811/872739083","state":["exists","up"]},{"osd":3,"uuid":"5d0a3a4c-94c7-4259-a288-b3e930c3faf3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":2851532553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2851532553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2851532553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":2851532553}]},"public_addr":"192.168.123.103:6813/2851532553","cluster_addr":"192.168.123.103:6814/2851532553","heartbeat_back_addr":"192.168.123.103:6816/2851532553","heartbeat_front_addr":"192.168.123.103:6815/2851532553","state":["exists","up"]},{"osd":4,"uuid":"1227a88f-b360-42a6-a96c-5fc1f52a1fbc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":
0,"last_clean_end":0,"up_from":28,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":288742704}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":288742704}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":288742704}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":288742704}]},"public_addr":"192.168.123.104:6800/288742704","cluster_addr":"192.168.123.104:6801/288742704","heartbeat_back_addr":"192.168.123.104:6803/288742704","heartbeat_front_addr":"192.168.123.104:6802/288742704","state":["exists","up"]},{"osd":5,"uuid":"816af1b8-9560-416b-b784-dce6f7c9ca65","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":34,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2731397521}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2731397521}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2731397521}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2731397521}]},"public_addr":"192.168.123.104:6804/2731397521","cluster_addr":"192.168.123.104:6805/2731397521","heartbeat_back_addr":"192.168.123.104:6807/2731397521","heartbeat_front_addr":"192.168.123.104:6806/2731397521","state":["exists","up"]},{"osd":6,"uuid":"25f29ca8-e401-49b8-826a-20452613ff7c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1293006004}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1293006004}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1293006004}]},"heartbeat_front_addrs":{"add
rvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1293006004}]},"public_addr":"192.168.123.104:6808/1293006004","cluster_addr":"192.168.123.104:6809/1293006004","heartbeat_back_addr":"192.168.123.104:6811/1293006004","heartbeat_front_addr":"192.168.123.104:6810/1293006004","state":["exists","up"]},{"osd":7,"uuid":"1c6b6a7e-1424-4a80-ab76-7670e4f673d4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3000381118}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":3000381118}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":3000381118}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":3000381118}]},"public_addr":"192.168.123.104:6812/3000381118","cluster_addr":"192.168.123.104:6813/3000381118","heartbeat_back_addr":"192.168.123.104:6815/3000381118","heartbeat_front_addr":"192.168.123.104:6814/3000381118","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:32.668690+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:42.669042+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:53.722649+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:04.294406+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features"
:4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:15.091582+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:24.474218+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:34.532335+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[{"pgid":"1.0","osds":[0,6,1]}],"primary_temp":[],"blocklist":{"192.168.123.103:0/1958783463":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1195610588":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1725027032":"2026-03-10T13:58:55.227586+0000","192.168.123.103:6800/2059456401":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/312194972":"2026-03-10T13:58:45.820778+0000","192.168.123.103:6800/1211621749":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/1847994174":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/3426585146":"2026-03-10T13:58:45.820778+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:00:46.668 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T13:59:57.267231+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '19', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T14:00:46.668 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell 
--fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd pool get .mgr pg_num 2026-03-09T14:00:46.839 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:47.110 INFO:teuthology.orchestra.run.vm03.stdout:pg_num: 1 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: purged_snaps scrub starts 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: purged_snaps scrub ok 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2147387897' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1619275090' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[52586]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: purged_snaps scrub starts 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: purged_snaps scrub ok 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2147387897' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1619275090' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:47.180 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:47 vm03 ceph-mon[58994]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T14:00:47.181 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm03 2026-03-09T14:00:47.181 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply rgw foo.a --placement '1;vm03=foo.a' 2026-03-09T14:00:47.358 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: purged_snaps scrub starts 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: purged_snaps scrub ok 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2147387897' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/1619275090' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:47.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:47 vm04 ceph-mon[54203]: osdmap e44: 8 total, 8 up, 8 in 2026-03-09T14:00:47.598 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled rgw.foo.a update... 2026-03-09T14:00:47.678 DEBUG:teuthology.orchestra.run.vm03:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@rgw.foo.a.service 2026-03-09T14:00:47.679 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm04 2026-03-09T14:00:47.679 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd pool create datapool 3 3 replicated 2026-03-09T14:00:47.870 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/1097644535' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:48 
vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.354 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1097644535' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw 
*=*"]}]': finished 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.355 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1097644535' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-09T14:00:48.356 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:48.356 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:48 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:48.356 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 systemd[1]: Starting Ceph rgw.foo.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:00:48.757 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 podman[82415]: 2026-03-09 14:00:48.351756519 +0000 UTC m=+0.015522977 container create bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T14:00:48.757 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 podman[82415]: 2026-03-09 
14:00:48.385059636 +0000 UTC m=+0.048826093 container init bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-09T14:00:48.757 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 podman[82415]: 2026-03-09 14:00:48.387687655 +0000 UTC m=+0.051454123 container start bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-09T14:00:48.757 
INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 bash[82415]: bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde 2026-03-09T14:00:48.758 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 podman[82415]: 2026-03-09 14:00:48.345540886 +0000 UTC m=+0.009307363 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T14:00:48.758 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:48 vm03 systemd[1]: Started Ceph rgw.foo.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:00:49.106 INFO:teuthology.orchestra.run.vm04.stderr:pool 'datapool' created 2026-03-09T14:00:49.165 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- rbd pool init datapool 2026-03-09T14:00:49.352 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: Deploying daemon rgw.foo.a on vm03 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 
vm04 ceph-mon[54203]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/2254626137' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:49 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", 
"svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: Deploying daemon rgw.foo.a on vm03 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/2254626137' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[52586]: from='mgr.14150 
192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='client.24295 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm03=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: Deploying daemon rgw.foo.a on vm03 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: osdmap e45: 8 total, 8 up, 8 in 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/2254626137' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:49.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:49 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: Checking dashboard <-> RGW credentials 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/1339054035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: from='client.? 
192.168.123.104:0/1339054035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:00:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:50 vm04 ceph-mon[54203]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: Checking dashboard <-> RGW 
credentials 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/1339054035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/1339054035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[52586]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: Saving service rgw.foo.a spec with placement vm03=foo.a;count:1 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: osdmap e46: 8 total, 8 up, 8 in 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: Checking dashboard <-> RGW credentials 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/1339054035' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/496919433' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/1339054035' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-09T14:00:50.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:50 vm03 ceph-mon[58994]: osdmap e47: 8 total, 8 up, 8 in 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: pgmap v93: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:51 vm04 ceph-mon[54203]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: pgmap v93: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[52586]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: pgmap v93: 36 pgs: 35 unknown, 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: osdmap e48: 8 total, 8 up, 8 in 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:51 vm03 ceph-mon[58994]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-09T14:00:52.182 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.104 --placement '1;vm04=iscsi.a' 2026-03-09T14:00:52.361 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:52.612 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled iscsi.datapool update... 2026-03-09T14:00:52.671 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg... 2026-03-09T14:00:52.671 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T14:00:52.671 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T14:00:52.697 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T14:00:52.698 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/iscsi-gateway.cfg 2026-03-09T14:00:52.724 DEBUG:teuthology.orchestra.run.vm04:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@iscsi.iscsi.a.service 2026-03-09T14:00:52.766 INFO:tasks.cephadm:Adding prometheus.a on vm04 2026-03-09T14:00:52.766 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply prometheus '1;vm04=a' 2026-03-09T14:00:52.974 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:53.218 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled prometheus update... 
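The ceph-mon audit entries repeated above all follow one shape: `from='<who>' entity='<name>' cmd=[{...}]: dispatch` for submission and `cmd='[{...}]': finished` for completion, with the command itself embedded as JSON. A minimal sketch of pulling those fields apart (the regex is an assumption inferred from the lines above, not teuthology code):

```python
import json
import re

# Matches both the "dispatch" form (cmd=[{...}]: dispatch) and the quoted
# "finished" form (cmd='[{...}]': finished) seen in the mon audit log.
AUDIT_RE = re.compile(
    r"from='(?P<src>[^']*)'\s+entity='(?P<entity>[^']*)'\s+"
    r"cmd='?(?P<cmd>\[.*\])'?:?\s*(?P<state>dispatch|finished)?"
)

def parse_audit(line: str):
    """Return (entity, command prefix, state) for one audit line, else None."""
    m = AUDIT_RE.search(line)
    if m is None:
        return None
    cmd = json.loads(m.group("cmd"))  # the embedded command is valid JSON
    return m.group("entity"), cmd[0].get("prefix"), m.group("state")

# A line taken verbatim from the log above:
sample = ("from='client.? 192.168.123.104:0/1339054035' entity='client.admin' "
          'cmd=[{"prefix": "osd pool application enable",'
          '"pool": "datapool","app": "rbd"}]: dispatch')
print(parse_audit(sample))
```

Grouping dispatch/finished pairs this way is one quick route to spotting commands that never complete in a noisy run.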
2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: pgmap v96: 68 pgs: 20 active+clean, 5 creating+peering, 43 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.250 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:53 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.283 DEBUG:teuthology.orchestra.run.vm04:prometheus.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@prometheus.a.service 2026-03-09T14:00:53.284 INFO:tasks.cephadm:Adding node-exporter.a on vm03 2026-03-09T14:00:53.284 INFO:tasks.cephadm:Adding node-exporter.b on vm04 2026-03-09T14:00:53.284 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply node-exporter '2;vm03=a;vm04=b' 2026-03-09T14:00:53.484 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: pgmap v96: 68 pgs: 20 active+clean, 5 creating+peering, 43 unknown; 450 KiB data, 214 
MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:53.484 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.484 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.484 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: pgmap v96: 68 pgs: 20 active+clean, 5 creating+peering, 43 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: osdmap e49: 8 total, 8 up, 8 in 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.485 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:53 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:53.491 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:53.725 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled node-exporter update... 2026-03-09T14:00:53.779 DEBUG:teuthology.orchestra.run.vm03:node-exporter.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.a.service 2026-03-09T14:00:53.781 DEBUG:teuthology.orchestra.run.vm04:node-exporter.b> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.b.service 2026-03-09T14:00:53.782 INFO:tasks.cephadm:Adding alertmanager.a on vm03 2026-03-09T14:00:53.782 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply alertmanager '1;vm03=a' 2026-03-09T14:00:53.995 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:54.289 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled alertmanager update... 
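Every orchestrator call in this run is wrapped the same way: `sudo cephadm --image <ci image> shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid <fsid> -- ceph orch apply ...`, so the target VM needs only the `cephadm` package, not a full ceph install. An illustrative reconstruction of that command builder (this is a sketch of the pattern visible in the log, not teuthology's actual implementation):

```python
import shlex

def cephadm_shell_cmd(image: str, fsid: str, ceph_args: list) -> str:
    """Build the `cephadm shell` wrapper string seen throughout this log."""
    base = [
        "sudo", "cephadm", "--image", image, "shell",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "--fsid", fsid, "--",
    ]
    # shlex.quote protects args like the ';'-separated placement specs.
    return " ".join(shlex.quote(a) for a in base + ceph_args)

cmd = cephadm_shell_cmd(
    "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
    "f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4",
    ["ceph", "orch", "apply", "node-exporter", "2;vm03=a;vm04=b"],
)
print(cmd)
```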
2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='client.24349 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.104", "placement": "1;vm04=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: Saving service iscsi.datapool spec with placement vm04=iscsi.a;count:1 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.291 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:54 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.353 DEBUG:teuthology.orchestra.run.vm03:alertmanager.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@alertmanager.a.service 2026-03-09T14:00:54.354 INFO:tasks.cephadm:Adding grafana.a on vm04 2026-03-09T14:00:54.355 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph orch apply grafana '1;vm04=a' 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='client.24349 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.104", "placement": "1;vm04=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: Saving service iscsi.datapool spec with placement vm04=iscsi.a;count:1 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='client.24349 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.104", "placement": "1;vm04=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: Saving service iscsi.datapool spec with placement vm04=iscsi.a;count:1 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: osdmap e50: 8 total, 8 up, 8 in 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-09T14:00:54.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:54 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:54.543 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:54.946 INFO:teuthology.orchestra.run.vm04.stdout:Scheduled grafana update... 2026-03-09T14:00:54.997 DEBUG:teuthology.orchestra.run.vm04:grafana.a> sudo journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@grafana.a.service 2026-03-09T14:00:54.999 INFO:tasks.cephadm:Setting up client nodes... 
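The placement strings passed to `ceph orch apply` above ("1;vm04=a", "2;vm03=a;vm04=b") and echoed back by the mons as "placement vm03=a;vm04=b;count:2" read as `<count>;host=daemon_id;...`. A small parser, assuming that format holds (inferred from these log lines):

```python
def parse_placement(spec: str):
    """Split a '<count>;host=id;...' placement spec into (count, {host: id})."""
    parts = spec.split(";")
    count = int(parts[0])
    hosts = dict(p.split("=", 1) for p in parts[1:])
    return count, hosts

print(parse_placement("2;vm03=a;vm04=b"))
```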
2026-03-09T14:00:54.999 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:00:55.207 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.24355 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: Saving service prometheus spec with placement vm04=a;count:1 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: pgmap v99: 100 pgs: 49 active+clean, 9 creating+peering, 42 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.24361 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: Saving service node-exporter spec with placement vm03=a;vm04=b;count:2 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:55 vm04 ceph-mon[54203]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.24355 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: Saving service prometheus spec with placement vm04=a;count:1 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: pgmap v99: 100 pgs: 49 active+clean, 9 creating+peering, 42 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.24361 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: Saving service node-exporter spec with placement vm03=a;vm04=b;count:2 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.281 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[52586]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.24355 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: Saving service prometheus spec with placement vm04=a;count:1 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: pgmap v99: 100 pgs: 49 active+clean, 9 creating+peering, 42 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.24361 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: Saving service node-exporter spec with placement vm03=a;vm04=b;count:2 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? 
' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: osdmap e51: 8 total, 8 up, 8 in 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: osdmap e52: 8 total, 8 up, 8 in 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.282 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:55 vm03 ceph-mon[58994]: from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-09T14:00:55.505 INFO:teuthology.orchestra.run.vm03.stdout:[client.0] 2026-03-09T14:00:55.505 INFO:teuthology.orchestra.run.vm03.stdout: key = AQCX0q5pgMalHRAAfGi6G2hzV+zfkRAkxHcpFw== 2026-03-09T14:00:55.557 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T14:00:55.557 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T14:00:55.557 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T14:00:55.594 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T14:00:55.772 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.b/config 2026-03-09T14:00:56.051 INFO:teuthology.orchestra.run.vm04.stdout:[client.1] 2026-03-09T14:00:56.051 INFO:teuthology.orchestra.run.vm04.stdout: key = AQCY0q5p77SaAhAAu7/dZbI+MsACKPI2UtvHdQ== 2026-03-09T14:00:56.116 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T14:00:56.116 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T14:00:56.116 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T14:00:56.173 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
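The client setup step above runs `ceph auth get-or-create client.N ...`, captures the INI-style keyring on stdout (`[client.1]` followed by an indented `key = ...`), writes it with `sudo dd of=/etc/ceph/ceph.client.N.keyring`, and opens it up with `chmod 0644`. A minimal sketch of recovering the entity and key from that output:

```python
def parse_keyring(text: str):
    """Extract (entity, key) from `ceph auth get-or-create` stdout."""
    entity, key = None, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            entity = line[1:-1]          # e.g. "client.1"
        elif line.startswith("key ="):
            key = line.split("=", 1)[1].strip()
    return entity, key

# Output shape taken from the log above (test-cluster throwaway key):
out = "[client.1]\n\tkey = AQCY0q5p77SaAhAAu7/dZbI+MsACKPI2UtvHdQ==\n"
print(parse_keyring(out))
```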
2026-03-09T14:00:56.173 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T14:00:56.173 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mgr dump --format=json 2026-03-09T14:00:56.357 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: Saving service alertmanager spec with placement vm03=a;count:1 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: Saving service grafana spec with placement vm04=a;count:1 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3732512353' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? 192.168.123.104:0/345540463' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.425 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: Saving service alertmanager spec with placement vm03=a;count:1 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: Saving service grafana spec with placement vm04=a;count:1 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3732512353' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? 192.168.123.104:0/345540463' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.426 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:56 vm03 ceph-mon[58994]: from='client.? 
' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm03=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: Saving service alertmanager spec with placement vm03=a;count:1 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm04=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: Saving service grafana spec with placement vm04=a;count:1 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3732512353' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? 192.168.123.104:0/345540463' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: osdmap e53: 8 total, 8 up, 8 in 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2288277520' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:56.607 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:56.685 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":15,"flags":0,"active_gid":14150,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1991233681}]},"active_addr":"192.168.123.103:6800/1991233681","active_change":"2026-03-09T13:58:55.227684+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24104,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":715706774}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.1
68.123.103:0","nonce":3364289776}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":3052704618}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":3703484955}]}]} 2026-03-09T14:00:56.686 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T14:00:56.686 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T14:00:56.686 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd dump --format=json 2026-03-09T14:00:56.878 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:57.117 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:57.117 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":53,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","created":"2026-03-09T13:58:34.850179+0000","modified":"2026-03-09T14:00:56.114943+0000","last_up_change":"2026-03-09T14:00:46.011734+0000","last_in_change":"2026-03-09T14:00:35.473318+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:59:57.267231+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num"
:1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T14:00:48.113726+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T14:00:48.438986+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T14:00:50.216734+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T14:00:52.171829+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T14:00:54.240318+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"53","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5c050d28-3a63-4c87-aafc-d7703eb5e579","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":2121486584}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2121486584}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2121486584}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6803","nonce":2121486584}]},"public_addr":"192.168.123.103:6801/2121486584","cluster_addr":"192.168.123.103:6802/2121486584","heartbeat_back_addr":"192.168.123.103:6804/2121486584","heartbeat_front_addr":"192.168.123.103:6803/2121486584","state":["exists","up"]},{"osd":1,"uuid":"b0d835e0-d8bd-405c-99e9-38882318aaa8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":4232373287}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":4232373287}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":4232373287}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6807","nonce":4232373287}]},"public_addr":"192.168.123.103:6805/4232373287","cluster_addr":"192.168.123.103:6806/4232373287","heartbeat_back_addr":"192.168.123.103:6808/4232373287","heartbeat_front_addr":"192.168.123.103:6807/4232373287","state":["exists","up"]},{"osd":2,"uuid":"9f582930-68f3-4f39-9077-1b35b670203b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":16,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":872739083}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":872739083}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":872739083}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":872739083}]},"public_addr":"192.168.123.103:6809/872739083","cluster_addr":"192.168.123.103:6810/872739083","heartbeat_back_addr":"192.168.123.103:6812/872739083","heartbeat_front_addr":"192.168.123.103:6811/872739083","state":["exists","up"]},{"osd":3,"uuid":"5d0a3a4c-94c7-4259-a288-b3e930c3faf3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":2851532553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2851532553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2851532553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":2851532553}]},"public_addr":"192.168.123.103:6813/2851532553","cluster_addr":"192.168.123.103:6814/2851532553","heartbeat_back_addr":"192.168.123.103:6816/2851532553","heartbeat_front_addr":"192.168.123.103:6815/2851532553","state":["exists","up"]},{"osd":4,"uuid":"1227a88f-b360-42a6-a96c-5fc1f52a1fbc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":288742704}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":288742704}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":288742704}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192
.168.123.104:6802","nonce":288742704}]},"public_addr":"192.168.123.104:6800/288742704","cluster_addr":"192.168.123.104:6801/288742704","heartbeat_back_addr":"192.168.123.104:6803/288742704","heartbeat_front_addr":"192.168.123.104:6802/288742704","state":["exists","up"]},{"osd":5,"uuid":"816af1b8-9560-416b-b784-dce6f7c9ca65","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2731397521}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2731397521}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2731397521}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2731397521}]},"public_addr":"192.168.123.104:6804/2731397521","cluster_addr":"192.168.123.104:6805/2731397521","heartbeat_back_addr":"192.168.123.104:6807/2731397521","heartbeat_front_addr":"192.168.123.104:6806/2731397521","state":["exists","up"]},{"osd":6,"uuid":"25f29ca8-e401-49b8-826a-20452613ff7c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1293006004}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1293006004}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1293006004}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1293006004}]},"public_addr":"192.168.123.104:6808/1293006004","cluster_addr":"192.168.123.104:6809/1293006004","heartbeat_back_addr":"192.168.123.104:6811/1293006004","heartbeat_front_addr":"192.168.123.104:6810/1293006004","state":["exists","up"]},{"osd":7,"uuid":"1c6b6a7e-1424-4a80-ab76-7670e4f673d4","up":1,"in":1,"weight":1,"primary_affinity":1,"las
t_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3000381118}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":3000381118}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":3000381118}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":3000381118}]},"public_addr":"192.168.123.104:6812/3000381118","cluster_addr":"192.168.123.104:6813/3000381118","heartbeat_back_addr":"192.168.123.104:6815/3000381118","heartbeat_front_addr":"192.168.123.104:6814/3000381118","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:32.668690+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:42.669042+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:53.722649+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:04.294406+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:15.091582+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:24.474218+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454070154773803827
1,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:34.532335+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:44.246501+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/1958783463":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1195610588":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1725027032":"2026-03-10T13:58:55.227586+0000","192.168.123.103:6800/2059456401":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/312194972":"2026-03-10T13:58:45.820778+0000","192.168.123.103:6800/1211621749":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/1847994174":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/3426585146":"2026-03-10T13:58:45.820778+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:00:57.196 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-09T14:00:57.196 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd dump --format=json 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: pgmap v102: 132 pgs: 69 active+clean, 11 creating+peering, 52 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2236916334' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2310509777' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[52586]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: pgmap v102: 132 pgs: 69 active+clean, 11 creating+peering, 52 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2236916334' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2310509777' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.197 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:57 vm03 ceph-mon[58994]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: pgmap v102: 132 pgs: 69 active+clean, 11 creating+peering, 52 unknown; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1073642403' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2236916334' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2310509777' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-09T14:00:57.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:57 vm04 ceph-mon[54203]: osdmap e54: 8 total, 8 up, 8 in 2026-03-09T14:00:57.492 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:57.542 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:00:57 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a[82425]: 2026-03-09T14:00:57.270+0000 7f93e1fb4980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-09T14:00:57.797 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:00:57.797 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":54,"fsid":"f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4","created":"2026-03-09T13:58:34.850179+0000","modified":"2026-03-09T14:00:57.118327+0000","last_up_change":"2026-03-09T14:00:46.011734+0000","last_in_change":"2026-03-09T14:00:35.473318+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T13:59:57.267231+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pen
ding":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"datapool","create_time":"2026-03-09T14:00:48.113726+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"49","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":49,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":".rgw.root","create_time":"2026-03-09T14:00:48.438986+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"48","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"default.rgw.log","create_time":"2026-03-09T14:00:50.216734+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"50","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-09T14:00:52.171829+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"52","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-09T14:00:54.240318+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"54","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"5c050d28-3a63-4c87-aafc-d7703eb5e579","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6801","nonce":2121486584}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":2121486584}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":2121486584}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6803","nonce":2121486584}]},"public_addr":"192.168.123.103:6801/2121486584","cluster_addr":"192.168.123.103:6802/2121486584","heartbeat_back_addr":"192.168.123.103:6804/2121486584","heartbeat_front_addr":"192.168.123.103:6803/2121486584","state":["exists","up"]},{"osd":1,"uuid":"b0d835e0-d8bd-405c-99e9-38882318aaa8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6805","nonce":4232373287}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":4232373287}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":4232373287}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6807","nonce":4232373287}]},"public_addr":"192.168.123.103:6805/4232373287","cluster_addr":"192.168.123.103:6806/4232373287","heartbeat_back_addr":"192.168.123.103:6808/4232373287","heartbeat_front_addr":"192.168.123.103:6807/4232373287","state":["exists","up"]},{"osd":2,"uuid":"9f582930-68f3-4f39-9077-1b35b670203b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":16,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6809","nonce":872739083}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":872739083}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":872739083}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6811","nonce":872739083}]},"public_addr":"192.168.123.103:6809/872739083","cluster_addr":"192.168.123.103:6810/872739083","heartbeat_back_addr":"192.168.123.103:6812/872739083","heartbeat_front_addr":"192.168.123.103:6811/872739083","state":["exists","up"]},{"osd":3,"uuid":"5d0a3a4c-94c7-4259-a288-b3e930c3faf3","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6813","nonce":2851532553}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":2851532553}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":2851532553}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6815","nonce":2851532553}]},"public_addr":"192.168.123.103:6813/2851532553","cluster_addr":"192.168.123.103:6814/2851532553","heartbeat_back_addr":"192.168.123.103:6816/2851532553","heartbeat_front_addr":"192.168.123.103:6815/2851532553","state":["exists","up"]},{"osd":4,"uuid":"1227a88f-b360-42a6-a96c-5fc1f52a1fbc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":28,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":288742704}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6801","nonce":288742704}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6803","nonce":288742704}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192
.168.123.104:6802","nonce":288742704}]},"public_addr":"192.168.123.104:6800/288742704","cluster_addr":"192.168.123.104:6801/288742704","heartbeat_back_addr":"192.168.123.104:6803/288742704","heartbeat_front_addr":"192.168.123.104:6802/288742704","state":["exists","up"]},{"osd":5,"uuid":"816af1b8-9560-416b-b784-dce6f7c9ca65","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":33,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":2731397521}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6805","nonce":2731397521}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6807","nonce":2731397521}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":2731397521}]},"public_addr":"192.168.123.104:6804/2731397521","cluster_addr":"192.168.123.104:6805/2731397521","heartbeat_back_addr":"192.168.123.104:6807/2731397521","heartbeat_front_addr":"192.168.123.104:6806/2731397521","state":["exists","up"]},{"osd":6,"uuid":"25f29ca8-e401-49b8-826a-20452613ff7c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":50,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6808","nonce":1293006004}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6809","nonce":1293006004}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6811","nonce":1293006004}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6810","nonce":1293006004}]},"public_addr":"192.168.123.104:6808/1293006004","cluster_addr":"192.168.123.104:6809/1293006004","heartbeat_back_addr":"192.168.123.104:6811/1293006004","heartbeat_front_addr":"192.168.123.104:6810/1293006004","state":["exists","up"]},{"osd":7,"uuid":"1c6b6a7e-1424-4a80-ab76-7670e4f673d4","up":1,"in":1,"weight":1,"primary_affinity":1,"las
t_clean_begin":0,"last_clean_end":0,"up_from":43,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6812","nonce":3000381118}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6813","nonce":3000381118}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6815","nonce":3000381118}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6814","nonce":3000381118}]},"public_addr":"192.168.123.104:6812/3000381118","cluster_addr":"192.168.123.104:6813/3000381118","heartbeat_back_addr":"192.168.123.104:6815/3000381118","heartbeat_front_addr":"192.168.123.104:6814/3000381118","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:32.668690+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:42.669042+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T13:59:53.722649+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:04.294406+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:15.091582+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:24.474218+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454070154773803827
1,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:34.532335+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T14:00:44.246501+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/1958783463":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1195610588":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/1725027032":"2026-03-10T13:58:55.227586+0000","192.168.123.103:6800/2059456401":"2026-03-10T13:58:55.227586+0000","192.168.123.103:0/312194972":"2026-03-10T13:58:45.820778+0000","192.168.123.103:6800/1211621749":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/1847994174":"2026-03-10T13:58:45.820778+0000","192.168.123.103:0/3426585146":"2026-03-10T13:58:45.820778+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T14:00:57.874 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.0 flush_pg_stats 2026-03-09T14:00:57.874 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.1 flush_pg_stats 2026-03-09T14:00:57.874 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 
shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.2 flush_pg_stats 2026-03-09T14:00:57.874 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.3 flush_pg_stats 2026-03-09T14:00:57.874 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.4 flush_pg_stats 2026-03-09T14:00:57.875 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.5 flush_pg_stats 2026-03-09T14:00:57.875 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.6 flush_pg_stats 2026-03-09T14:00:57.875 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph tell osd.7 flush_pg_stats 2026-03-09T14:00:58.640 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:58.652 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 systemd[1]: Starting Ceph iscsi.iscsi.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:00:58.668 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:58.675 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: pgmap v105: 132 pgs: 108 active+clean, 1 creating+activating, 11 creating+peering, 12 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 8 op/s 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:00:58.915 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: Deploying daemon iscsi.iscsi.a on vm04 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/685807605' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[52586]: Cluster is now healthy 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: pgmap v105: 132 pgs: 108 active+clean, 1 creating+activating, 11 creating+peering, 12 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 8 op/s 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix 
\"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: Deploying daemon iscsi.iscsi.a on vm04 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/685807605' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T14:00:58.915 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:58 vm03 ceph-mon[58994]: Cluster is now healthy 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 podman[77973]: 2026-03-09 14:00:58.648829732 +0000 UTC m=+0.017022343 container create af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a, CEPH_REF=squid, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 podman[77973]: 2026-03-09 14:00:58.694712283 +0000 UTC m=+0.062904904 container init af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 podman[77973]: 2026-03-09 14:00:58.698389346 +0000 UTC m=+0.066581967 container start af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 bash[77973]: af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 podman[77973]: 2026-03-09 14:00:58.641825943 +0000 UTC m=+0.010018564 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T14:00:58.922 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 systemd[1]: Started Ceph iscsi.iscsi.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: pgmap v105: 132 pgs: 108 active+clean, 1 creating+activating, 11 creating+peering, 12 unknown; 451 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 8 op/s 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 
ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: Deploying daemon iscsi.iscsi.a on vm04 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/685807605' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-09T14:00:58.922 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:58 vm04 ceph-mon[54203]: Cluster is now healthy 2026-03-09T14:00:59.112 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:59.122 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:59.130 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:59.132 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:59.239 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Started the configuration object watcher 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Checking for config object changes every 1s 2026-03-09T14:00:59.241 
INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:58 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Processing osd blocklist entries for this node 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Reading the configuration object to update local LIO configuration 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Configuration does not have an entry for this host(vm04.local) - nothing to define to LIO 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: * Serving Flask app 'rbd-target-api' (lazy loading) 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: * Environment: production 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: Use a production WSGI server instead. 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: * Debug mode: off 2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug * Running on all addresses. 
2026-03-09T14:00:59.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T14:00:59.242 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: * Running on all addresses. 2026-03-09T14:00:59.242 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: WARNING: This is a development server. Do not use it in a production deployment. 2026-03-09T14:00:59.242 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T14:00:59.242 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:00:59 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: * Running on http://[::1]:5000/ (Press CTRL+C to quit) 2026-03-09T14:00:59.628 INFO:teuthology.orchestra.run.vm03.stdout:51539607568 2026-03-09T14:00:59.629 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.1 2026-03-09T14:00:59.642 INFO:teuthology.orchestra.run.vm03.stdout:184683593732 2026-03-09T14:00:59.642 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.7 2026-03-09T14:00:59.653 INFO:teuthology.orchestra.run.vm03.stdout:120259084298 2026-03-09T14:00:59.653 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd 
last-stat-seq osd.4 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: Deploying daemon prometheus.a on vm04 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[52586]: from='client.? 
192.168.123.104:0/132050068' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:00:59.761 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: Deploying daemon prometheus.a on vm04 2026-03-09T14:00:59.762 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:00:59 vm03 ceph-mon[58994]: from='client.? 
192.168.123.104:0/132050068' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:01:00.002 INFO:teuthology.orchestra.run.vm03.stdout:98784247820 2026-03-09T14:01:00.002 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.3 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: Checking pool "datapool" exists for service iscsi.datapool 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: Deploying daemon prometheus.a on vm04 2026-03-09T14:01:00.065 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:00:59 vm04 ceph-mon[54203]: from='client.? 
192.168.123.104:0/132050068' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-09T14:01:00.075 INFO:teuthology.orchestra.run.vm03.stdout:34359738386 2026-03-09T14:01:00.075 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.0 2026-03-09T14:01:00.130 INFO:teuthology.orchestra.run.vm03.stdout:68719476750 2026-03-09T14:01:00.130 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.2 2026-03-09T14:01:00.132 INFO:teuthology.orchestra.run.vm03.stdout:141733920776 2026-03-09T14:01:00.132 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.5 2026-03-09T14:01:00.137 INFO:teuthology.orchestra.run.vm03.stdout:163208757254 2026-03-09T14:01:00.137 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.6 2026-03-09T14:01:00.509 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:00.515 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:00.583 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:00.984 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:01.045 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:00 vm03 ceph-mon[58994]: pgmap v106: 132 pgs: 117 active+clean, 1 creating+activating, 11 creating+peering, 3 unknown; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 3.1 KiB/s wr, 58 op/s 2026-03-09T14:01:01.045 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:00 vm03 ceph-mon[58994]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:01:01.045 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:00 vm03 ceph-mon[52586]: pgmap v106: 132 pgs: 117 active+clean, 1 creating+activating, 11 creating+peering, 3 unknown; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 3.1 KiB/s wr, 58 op/s 2026-03-09T14:01:01.045 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:00 vm03 ceph-mon[52586]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:01:01.241 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:01.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:00 vm04 ceph-mon[54203]: pgmap v106: 132 pgs: 117 active+clean, 1 creating+activating, 11 creating+peering, 3 unknown; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 3.1 KiB/s wr, 58 op/s 2026-03-09T14:01:01.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:00 vm04 ceph-mon[54203]: mgrmap e16: y(active, since 2m), standbys: x 2026-03-09T14:01:01.322 INFO:teuthology.orchestra.run.vm03.stdout:184683593731 2026-03-09T14:01:01.332 INFO:teuthology.orchestra.run.vm03.stdout:120259084297 2026-03-09T14:01:01.349 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:01.460 INFO:teuthology.orchestra.run.vm03.stdout:51539607567 2026-03-09T14:01:01.548 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084298 got 120259084297 for osd.4 2026-03-09T14:01:01.561 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config 
/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:01.699 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607568 got 51539607567 for osd.1 2026-03-09T14:01:01.719 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593732 got 184683593731 for osd.7 2026-03-09T14:01:01.757 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:01.855 INFO:teuthology.orchestra.run.vm03.stdout:98784247819 2026-03-09T14:01:02.008 INFO:teuthology.orchestra.run.vm03.stdout:34359738385 2026-03-09T14:01:02.074 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738386 got 34359738385 for osd.0 2026-03-09T14:01:02.097 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247820 got 98784247819 for osd.3 2026-03-09T14:01:02.125 INFO:teuthology.orchestra.run.vm03.stdout:141733920777 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/144508069' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2421538082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3909641072' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1990406190' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2662754306' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/144508069' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2421538082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3909641072' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1990406190' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:02.125 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:02 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2662754306' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:02.212 INFO:tasks.cephadm.ceph_manager.ceph:need seq 141733920776 got 141733920777 for osd.5 2026-03-09T14:01:02.212 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:02.229 INFO:teuthology.orchestra.run.vm03.stdout:68719476750 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: osdmap e55: 8 total, 8 up, 8 in 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/144508069' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2421538082' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3909641072' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1990406190' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:02 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/2662754306' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:02.297 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476750 got 68719476750 for osd.2 2026-03-09T14:01:02.297 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:02.366 INFO:teuthology.orchestra.run.vm03.stdout:163208757254 2026-03-09T14:01:02.427 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757254 got 163208757254 for osd.6 2026-03-09T14:01:02.427 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:02.548 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.4 2026-03-09T14:01:02.699 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.1 2026-03-09T14:01:02.720 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.7 2026-03-09T14:01:02.813 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:03.066 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:03.074 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.0 2026-03-09T14:01:03.098 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph osd last-stat-seq osd.3 
2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[52586]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 189 op/s 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/10708859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1515906710' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2337650756' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[58994]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 189 op/s 2026-03-09T14:01:03.193 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/10708859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:01:03.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1515906710' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:01:03.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2337650756' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:01:03.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:03 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:03.234 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:03.252 INFO:teuthology.orchestra.run.vm03.stdout:120259084298 2026-03-09T14:01:03.418 INFO:tasks.cephadm.ceph_manager.ceph:need seq 120259084298 got 120259084298 for osd.4 2026-03-09T14:01:03.418 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:03 vm04 ceph-mon[54203]: pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 77 KiB/s rd, 6.2 KiB/s wr, 189 op/s 2026-03-09T14:01:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:03 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/10708859' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-09T14:01:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:03 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1515906710' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T14:01:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:03 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/2337650756' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-09T14:01:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:03 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:03.579 INFO:teuthology.orchestra.run.vm03.stdout:51539607569 2026-03-09T14:01:03.628 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:03.648 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:03.754 INFO:teuthology.orchestra.run.vm03.stdout:184683593733 2026-03-09T14:01:03.784 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607568 got 51539607569 for osd.1 2026-03-09T14:01:03.784 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:03.859 INFO:tasks.cephadm.ceph_manager.ceph:need seq 184683593732 got 184683593733 for osd.7 2026-03-09T14:01:03.859 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:03.982 INFO:teuthology.orchestra.run.vm03.stdout:34359738387 2026-03-09T14:01:04.022 INFO:teuthology.orchestra.run.vm03.stdout:98784247820 2026-03-09T14:01:04.168 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738386 got 34359738387 for osd.0 2026-03-09T14:01:04.168 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:04.192 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247820 got 98784247820 for osd.3 2026-03-09T14:01:04.193 DEBUG:teuthology.parallel:result is None 2026-03-09T14:01:04.193 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T14:01:04.193 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph pg dump --format=json 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/801810363' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3851031888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2415537747' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1929870198' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1283398624' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/801810363' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3851031888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2415537747' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1929870198' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:04.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1283398624' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:04.395 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/801810363' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-09T14:01:04.396 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3851031888' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T14:01:04.396 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2415537747' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-09T14:01:04.396 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1929870198' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T14:01:04.396 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:04 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/1283398624' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-09T14:01:04.419 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:04.664 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:01:04.665 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T14:01:04.727 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":109,"stamp":"2026-03-09T14:01:03.261767+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":776,"num_read_kb":519,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220408,"kb_used_data":5728,"kb_used_omap":12,"kb_used
_meta":214515,"kb_avail":167518984,"statfs":{"total":171765137408,"available":171539439616,"internally_reserved":0,"allocated":5865472,"data_stored":3052363,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":76,"apply_latency_ms":76,"commit_latency_ns":76000000,"apply_latency_ns":76000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4490,"num_objects":186,"num_object_clones":0,"num_object_copies":558,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":186,"num_whiteouts":0,"num_read":713,"num_read_kb":465,"num_write":423,"num_write_kb":37,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"7.142344"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137616+0000","last_change":"2026-03-09T14:00:50.12
4706+0000","last_active":"2026-03-09T14:00:57.137616+0000","last_peered":"2026-03-09T14:00:57.137616+0000","last_clean":"2026-03-09T14:00:57.137616+0000","last_became_active":"2026-03-09T14:00:50.124479+0000","last_became_peered":"2026-03-09T14:00:50.124479+0000","last_unstale":"2026-03-09T14:00:57.137616+0000","last_undegraded":"2026-03-09T14:00:57.137616+0000","last_fullsized":"2026-03-09T14:00:57.137616+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:57.185062+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191375+0000","last_change":"2026-03-09T14:00:52.145594+0000","last_active":"2026-03-09T14:01:01.191375+0000","last_peered":"2026-03-09T14:01:01.191375+0000","last_clean":"2026-03-09T14:01:01.191375+0000","last_became_active":"2026-03-09T14:00:52.145336+0000","last_became_peered":"2026-03-09T14:00:52.145336+0000","last_unstale":"2026-03-09T14:01:01.191375+0000","last_undegraded":"2026-03-09T14:01:01.191375+0000","last_fullsized":"2026-03-09T14:01:01.191375+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:13:18.777016+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125346+0000","last_change":"2026-03-09T14:00:54.135389+0000","last_active":"2026-03-09T14:00:57.125346+0000","last_peered":"2026-03-09T14:00:57.125346+0000","last_clean":"2026-03-09T14:00:57.125346+0000","last_became_active":"2026-03-09T14:00:54.135294+0000","last_became_peered":"2026-03-09T14:00:54.135294+000
0","last_unstale":"2026-03-09T14:00:57.125346+0000","last_undegraded":"2026-03-09T14:00:57.125346+0000","last_fullsized":"2026-03-09T14:00:57.125346+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:15:00.725084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.
1a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191375+0000","last_change":"2026-03-09T14:00:56.141611+0000","last_active":"2026-03-09T14:01:01.191375+0000","last_peered":"2026-03-09T14:01:01.191375+0000","last_clean":"2026-03-09T14:01:01.191375+0000","last_became_active":"2026-03-09T14:00:56.141165+0000","last_became_peered":"2026-03-09T14:00:56.141165+0000","last_unstale":"2026-03-09T14:01:01.191375+0000","last_undegraded":"2026-03-09T14:01:01.191375+0000","last_fullsized":"2026-03-09T14:01:01.191375+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:14:30.173549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187885+0000","last_change":"2026-03-09T14:00:56.637703+0000","last_active":"2026-03-09T14:01:01.187885+0000","last_peered":"2026-03-09T14:01:01.187885+0000","last_clean":"2026-03-09T14:01:01.187885+0000","last_became_active":"2026-03-09T14:00:56.636353+0000","last_became_peered":"2026-03-09T14:00:56.636353+0000","last_unstale":"2026-03-09T14:01:01.187885+0000","last_undegraded":"2026-03-09T14:01:01.187885+0000","last_fullsized":"2026-03-09T14:01:01.187885+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:34:05.734008+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187908+0000","last_change":"2026-03-09T14:00:50.135838+0000","last_active":"2026-03-09T14:01:01.187908+0000","last_peered":"2026-03-09T14:01:01.187908+0000","last_clean":"2026-03-09T14:01:01.187908+0000","last_became_active":"2026-03-09T14:00:50.126256+0000","last_became_peered":"2026-03-09T14:00:50.126256+0000","las
t_unstale":"2026-03-09T14:01:01.187908+0000","last_undegraded":"2026-03-09T14:01:01.187908+0000","last_fullsized":"2026-03-09T14:01:01.187908+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:05:43.294214+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187932+0000","last_change":"2026-03-09T14:00:52.125828+0000","last_active":"2026-03-09T14:01:01.187932+0000","last_peered":"2026-03-09T14:01:01.187932+0000","last_clean":"2026-03-09T14:01:01.187932+0000","last_became_active":"2026-03-09T14:00:52.125688+0000","last_became_peered":"2026-03-09T14:00:52.125688+0000","last_unstale":"2026-03-09T14:01:01.187932+0000","last_undegraded":"2026-03-09T14:01:01.187932+0000","last_fullsized":"2026-03-09T14:01:01.187932+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:03:39.962511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190465+0000","last_change":"2026-03-09T14:00:54.159732+0000","last_active":"2026-03-09T14:01:01.190465+0000","last_peered":"2026-03-09T14:01:01.190465+0000","last_clean":"2026-03-09T14:01:01.190465+0000","last_became_active":"2026-03-09T14:00:54.159573+0000","last_became_peered":"2026-03-09T14:00:54.159573+0000","last_unstale":"2026-03-09T14:01:01.190465+0000","last_undegraded":"2026-03-09T14:01:01.190465+0000","last_fullsized":"2026-03-09T14:01:01.190465+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:38.542882+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021169+0000","last_change":"2026-03-09T14:00:50.115103+0000","last_active":"2026-03-09T14:01:01.021169+0000","last_peered":"2026-03-09T14:01:01.021169+0000","last_clean":"2026-03-09T14:01:01.021169+0000","last_became_active":"2026-03-09T14:00:50.112433+0000","last_became_peered":"2026-03-09T14:00:50.112433+0000
","last_unstale":"2026-03-09T14:01:01.021169+0000","last_undegraded":"2026-03-09T14:01:01.021169+0000","last_fullsized":"2026-03-09T14:01:01.021169+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:31:15.423838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1
a","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190610+0000","last_change":"2026-03-09T14:00:52.144870+0000","last_active":"2026-03-09T14:01:01.190610+0000","last_peered":"2026-03-09T14:01:01.190610+0000","last_clean":"2026-03-09T14:01:01.190610+0000","last_became_active":"2026-03-09T14:00:52.144703+0000","last_became_peered":"2026-03-09T14:00:52.144703+0000","last_unstale":"2026-03-09T14:01:01.190610+0000","last_undegraded":"2026-03-09T14:01:01.190610+0000","last_fullsized":"2026-03-09T14:01:01.190610+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:26:39.769449+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021207+0000","last_change":"2026-03-09T14:00:54.135182+0000","last_active":"2026-03-09T14:01:01.021207+0000","last_peered":"2026-03-09T14:01:01.021207+0000","last_clean":"2026-03-09T14:01:01.021207+0000","last_became_active":"2026-03-09T14:00:54.135052+0000","last_became_peered":"2026-03-09T14:00:54.135052+0000","last_unstale":"2026-03-09T14:01:01.021207+0000","last_undegraded":"2026-03-09T14:01:01.021207+0000","last_fullsized":"2026-03-09T14:01:01.021207+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:34:53.793325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137836+0000","last_change":"2026-03-09T14:00:56.152695+0000","last_active":"2026-03-09T14:00:57.137836+0000","last_peered":"2026-03-09T14:00:57.137836+0000","last_clean":"2026-03-09T14:00:57.137836+0000","last_became_active":"2026-03-09T14:00:56.152570+0000","last_became_peered":"2026-03-09T14:00:56.152570+0000
","last_unstale":"2026-03-09T14:00:57.137836+0000","last_undegraded":"2026-03-09T14:00:57.137836+0000","last_fullsized":"2026-03-09T14:00:57.137836+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:20:44.317207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1
c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021432+0000","last_change":"2026-03-09T14:00:50.115695+0000","last_active":"2026-03-09T14:01:01.021432+0000","last_peered":"2026-03-09T14:01:01.021432+0000","last_clean":"2026-03-09T14:01:01.021432+0000","last_became_active":"2026-03-09T14:00:50.112569+0000","last_became_peered":"2026-03-09T14:00:50.112569+0000","last_unstale":"2026-03-09T14:01:01.021432+0000","last_undegraded":"2026-03-09T14:01:01.021432+0000","last_fullsized":"2026-03-09T14:01:01.021432+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:26:53.019912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"54'5","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190682+0000","last_change":"2026-03-09T14:00:52.165696+0000","last_active":"2026-03-09T14:01:01.190682+0000","last_peered":"2026-03-09T14:01:01.190682+0000","last_clean":"2026-03-09T14:01:01.190682+0000","last_became_active":"2026-03-09T14:00:52.165615+0000","last_became_peered":"2026-03-09T14:00:52.165615+0000","last_unstale":"2026-03-09T14:01:01.190682+0000","last_undegraded":"2026-03-09T14:01:01.190682+0000","last_fullsized":"2026-03-09T14:01:01.190682+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:46:05.967459+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159452+0000","last_change":"2026-03-09T14:00:54.157187+0000","last_active":"2026-03-09T14:00:57.159452+0000","last_peered":"2026-03-09T14:00:57.159452+0000","last_clean":"2026-03-09T14:00:57.159452+0000","last_became_active":"2026-03-09T14:00:54.156697+0000","last_became_peered":"2026-03-09T14:00:54.156697+0000",
"last_unstale":"2026-03-09T14:00:57.159452+0000","last_undegraded":"2026-03-09T14:00:57.159452+0000","last_fullsized":"2026-03-09T14:00:57.159452+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:57:45.992426+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021402+0000","last_change":"2026-03-09T14:00:56.151137+0000","last_active":"2026-03-09T14:01:01.021402+0000","last_peered":"2026-03-09T14:01:01.021402+0000","last_clean":"2026-03-09T14:01:01.021402+0000","last_became_active":"2026-03-09T14:00:56.150968+0000","last_became_peered":"2026-03-09T14:00:56.150968+0000","last_unstale":"2026-03-09T14:01:01.021402+0000","last_undegraded":"2026-03-09T14:01:01.021402+0000","last_fullsized":"2026-03-09T14:01:01.021402+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:48:04.441308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191299+0000","last_change":"2026-03-09T14:00:56.634838+0000","last_active":"2026-03-09T14:01:01.191299+0000","last_peered":"2026-03-09T14:01:01.191299+0000","last_clean":"2026-03-09T14:01:01.191299+0000","last_became_active":"2026-03-09T14:00:56.634720+0000","last_became_peered":"2026-03-09T14:00:56.634720+0000","last_unstale":"2026-03-09T14:01:01.191299+0000","last_undegraded":"2026-03-09T14:01:01.191299+0000","last_fullsized":"2026-03-09T14:01:01.191299+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:17:18.798862+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137674+0000","last_change":"2026-03-09T14:00:50.117424+0000","last_active":"2026-03-09T14:00:57.137674+0000","last_peered":"2026-03-09T14:00:57.137674+0000","last_clean":"2026-03-09T14:00:57.137674+0000","last_became_active":"2026-03-09T14:00:50.117125+0000","last_became_peered":"2026-03-09T14:00:50.117125+0000","las
t_unstale":"2026-03-09T14:00:57.137674+0000","last_undegraded":"2026-03-09T14:00:57.137674+0000","last_fullsized":"2026-03-09T14:00:57.137674+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:49:23.702200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034393+0000","last_change":"2026-03-09T14:00:52.126262+0000","last_active":"2026-03-09T14:01:01.034393+0000","last_peered":"2026-03-09T14:01:01.034393+0000","last_clean":"2026-03-09T14:01:01.034393+0000","last_became_active":"2026-03-09T14:00:52.126157+0000","last_became_peered":"2026-03-09T14:00:52.126157+0000","last_unstale":"2026-03-09T14:01:01.034393+0000","last_undegraded":"2026-03-09T14:01:01.034393+0000","last_fullsized":"2026-03-09T14:01:01.034393+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:54:31.188950+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125515+0000","last_change":"2026-03-09T14:00:54.152260+0000","last_active":"2026-03-09T14:00:57.125515+0000","last_peered":"2026-03-09T14:00:57.125515+0000","last_clean":"2026-03-09T14:00:57.125515+0000","last_became_active":"2026-03-09T14:00:54.152155+0000","last_became_peered":"2026-03-09T14:00:54.152155+0000","last_unstale":"2026-03-09T14:00:57.125515+0000","last_undegraded":"2026-03-09T14:00:57.125515+0000","last_fullsized":"2026-03-09T14:00:57.125515+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:11:31.573364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188612+0000","last_change":"2026-03-09T14:00:56.637905+0000","last_active":"2026-03-09T14:01:01.188612+0000","last_peered":"2026-03-09T14:01:01.188612+0000","last_clean":"2026-03-09T14:01:01.188612+0000","last_became_active":"2026-03-09T14:00:56.636642+0000","last_became_peered":"2026-03-09T14:00:56.636642+0000
","last_unstale":"2026-03-09T14:01:01.188612+0000","last_undegraded":"2026-03-09T14:01:01.188612+0000","last_fullsized":"2026-03-09T14:01:01.188612+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:21:33.251755+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1
a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190941+0000","last_change":"2026-03-09T14:00:50.131269+0000","last_active":"2026-03-09T14:01:01.190941+0000","last_peered":"2026-03-09T14:01:01.190941+0000","last_clean":"2026-03-09T14:01:01.190941+0000","last_became_active":"2026-03-09T14:00:50.131165+0000","last_became_peered":"2026-03-09T14:00:50.131165+0000","last_unstale":"2026-03-09T14:01:01.190941+0000","last_undegraded":"2026-03-09T14:01:01.190941+0000","last_fullsized":"2026-03-09T14:01:01.190941+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:32:39.823468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188652+0000","last_change":"2026-03-09T14:00:52.132675+0000","last_active":"2026-03-09T14:01:01.188652+0000","last_peered":"2026-03-09T14:01:01.188652+0000","last_clean":"2026-03-09T14:01:01.188652+0000","last_became_active":"2026-03-09T14:00:52.132573+0000","last_became_peered":"2026-03-09T14:00:52.132573+0000","last_unstale":"2026-03-09T14:01:01.188652+0000","last_undegraded":"2026-03-09T14:01:01.188652+0000","last_fullsized":"2026-03-09T14:01:01.188652+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:58:59.467750+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190831+0000","last_change":"2026-03-09T14:00:54.141172+0000","last_active":"2026-03-09T14:01:01.190831+0000","last_peered":"2026-03-09T14:01:01.190831+0000","last_clean":"2026-03-09T14:01:01.190831+0000","last_became_active":"2026-03-09T14:00:54.141019+0000","last_became_peered":"2026-03-09T14:00:54.141019+
0000","last_unstale":"2026-03-09T14:01:01.190831+0000","last_undegraded":"2026-03-09T14:01:01.190831+0000","last_fullsized":"2026-03-09T14:01:01.190831+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:23:31.004418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"6.1c","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.160130+0000","last_change":"2026-03-09T14:00:56.154429+0000","last_active":"2026-03-09T14:00:57.160130+0000","last_peered":"2026-03-09T14:00:57.160130+0000","last_clean":"2026-03-09T14:00:57.160130+0000","last_became_active":"2026-03-09T14:00:56.154062+0000","last_became_peered":"2026-03-09T14:00:56.154062+0000","last_unstale":"2026-03-09T14:00:57.160130+0000","last_undegraded":"2026-03-09T14:00:57.160130+0000","last_fullsized":"2026-03-09T14:00:57.160130+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:00:25.612543+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125761+0000","last_change":"2026-03-09T14:00:50.120949+0000","last_active":"2026-03-09T14:00:57.125761+0000","last_peered":"2026-03-09T14:00:57.125761+0000","last_clean":"2026-03-09T14:00:57.125761+0000","last_became_active":"2026-03-09T14:00:50.120823+0000","last_became_peered":"2026-03-09T14:00:50.120823+0000","last_unstale":"2026-03-09T14:00:57.125761+0000","last_undegraded":"2026-03-09T14:00:57.125761+0000","last_fullsized":"2026-03-09T14:00:57.125761+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.
088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:12:03.943354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.069929+0000","last_change":"2026-03-09T14:00:52.127212+0000","last_active":"2026-03-09T14:00:58.069929+0000","last_peered":"2026-03-09T14:00:58.069929+0000","last_clean":"2026-03-09T14:00:58.069929+0000","last_became_active":"2026-03-09T14:00:52.127102+0000","last_became_peered":"2026-03-09T14:00:52.127102+0000
","last_unstale":"2026-03-09T14:00:58.069929+0000","last_undegraded":"2026-03-09T14:00:58.069929+0000","last_fullsized":"2026-03-09T14:00:58.069929+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:37:17.757264+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid
":"5.1f","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.189088+0000","last_change":"2026-03-09T14:00:54.152260+0000","last_active":"2026-03-09T14:01:01.189088+0000","last_peered":"2026-03-09T14:01:01.189088+0000","last_clean":"2026-03-09T14:01:01.189088+0000","last_became_active":"2026-03-09T14:00:54.152101+0000","last_became_peered":"2026-03-09T14:00:54.152101+0000","last_unstale":"2026-03-09T14:01:01.189088+0000","last_undegraded":"2026-03-09T14:01:01.189088+0000","last_fullsized":"2026-03-09T14:01:01.189088+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:21:46.931801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.100259+0000","last_change":"2026-03-09T14:00:52.166393+0000","last_active":"2026-03-09T14:00:58.100259+0000","last_peered":"2026-03-09T14:00:58.100259+0000","last_clean":"2026-03-09T14:00:58.100259+0000","last_became_active":"2026-03-09T14:00:52.166001+0000","last_became_peered":"2026-03-09T14:00:52.166001+0000","last_unstale":"2026-03-09T14:00:58.100259+0000","last_undegraded":"2026-03-09T14:00:58.100259+0000","last_fullsized":"2026-03-09T14:00:58.100259+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:28:23.678927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188056+0000","last_change":"2026-03-09T14:00:50.135895+0000","last_active":"2026-03-09T14:01:01.188056+0000","last_peered":"2026-03-09T14:01:01.188056+0000","last_clean":"2026-03-09T14:01:01.188056+0000","last_became_active":"2026-03-09T14:00:50.135528+0000","last_became_peered":"2026-03-09T14:00:50.135528+00
00","last_unstale":"2026-03-09T14:01:01.188056+0000","last_undegraded":"2026-03-09T14:01:01.188056+0000","last_fullsized":"2026-03-09T14:01:01.188056+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:07:58.161997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.e","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191393+0000","last_change":"2026-03-09T14:00:54.141114+0000","last_active":"2026-03-09T14:01:01.191393+0000","last_peered":"2026-03-09T14:01:01.191393+0000","last_clean":"2026-03-09T14:01:01.191393+0000","last_became_active":"2026-03-09T14:00:54.140873+0000","last_became_peered":"2026-03-09T14:00:54.140873+0000","last_unstale":"2026-03-09T14:01:01.191393+0000","last_undegraded":"2026-03-09T14:01:01.191393+0000","last_fullsized":"2026-03-09T14:01:01.191393+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:00:00.630776+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022152+0000","last_change":"2026-03-09T14:00:56.134867+0000","last_active":"2026-03-09T14:01:01.022152+0000","last_peered":"2026-03-09T14:01:01.022152+0000","last_clean":"2026-03-09T14:01:01.022152+0000","last_became_active":"2026-03-09T14:00:56.134770+0000","last_became_peered":"2026-03-09T14:00:56.134770+0000","last_unstale":"2026-03-09T14:01:01.022152+0000","last_undegraded":"2026-03-09T14:01:01.022152+0000","last_fullsized":"2026-03-09T14:01:01.022152+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:43:00.327917+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":55,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188152+0000","last_change":"2026-03-09T14:00:52.160966+0000","last_active":"2026-03-09T14:01:01.188152+0000","last_peered":"2026-03-09T14:01:01.188152+0000","last_clean":"2026-03-09T14:01:01.188152+0000","last_became_active":"2026-03-09T14:00:52.160096+0000","last_became_peered":"2026-03-09T14:00:52.160096+0000","las
t_unstale":"2026-03-09T14:01:01.188152+0000","last_undegraded":"2026-03-09T14:01:01.188152+0000","last_fullsized":"2026-03-09T14:01:01.188152+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:14:28.631322+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.7","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188125+0000","last_change":"2026-03-09T14:00:50.123500+0000","last_active":"2026-03-09T14:01:01.188125+0000","last_peered":"2026-03-09T14:01:01.188125+0000","last_clean":"2026-03-09T14:01:01.188125+0000","last_became_active":"2026-03-09T14:00:50.123416+0000","last_became_peered":"2026-03-09T14:00:50.123416+0000","last_unstale":"2026-03-09T14:01:01.188125+0000","last_undegraded":"2026-03-09T14:01:01.188125+0000","last_fullsized":"2026-03-09T14:01:01.188125+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:53:53.532715+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190511+0000","last_change":"2026-03-09T14:00:54.155414+0000","last_active":"2026-03-09T14:01:01.190511+0000","last_peered":"2026-03-09T14:01:01.190511+0000","last_clean":"2026-03-09T14:01:01.190511+0000","last_became_active":"2026-03-09T14:00:54.154137+0000","last_became_peered":"2026-03-09T14:00:54.154137+0000","last_unstale":"2026-03-09T14:01:01.190511+0000","last_undegraded":"2026-03-09T14:01:01.190511+0000","last_fullsized":"2026-03-09T14:01:01.190511+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:20:51.895356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190527+0000","last_change":"2026-03-09T14:00:56.149661+0000","last_active":"2026-03-09T14:01:01.190527+0000","last_peered":"2026-03-09T14:01:01.190527+0000","last_clean":"2026-03-09T14:01:01.190527+0000","last_became_active":"2026-03-09T14:00:56.149575+0000","last_became_peered":"2026-03-09T14:00:56.149575+0000","last_
unstale":"2026-03-09T14:01:01.190527+0000","last_undegraded":"2026-03-09T14:01:01.190527+0000","last_fullsized":"2026-03-09T14:01:01.190527+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:20:47.099958+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","versi
on":"54'14","reported_seq":44,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191342+0000","last_change":"2026-03-09T14:00:52.148403+0000","last_active":"2026-03-09T14:01:01.191342+0000","last_peered":"2026-03-09T14:01:01.191342+0000","last_clean":"2026-03-09T14:01:01.191342+0000","last_became_active":"2026-03-09T14:00:52.148224+0000","last_became_peered":"2026-03-09T14:00:52.148224+0000","last_unstale":"2026-03-09T14:01:01.191342+0000","last_undegraded":"2026-03-09T14:01:01.191342+0000","last_fullsized":"2026-03-09T14:01:01.191342+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:30:05.259006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137602+0000","last_change":"2026-03-09T14:00:50.134015+0000","last_active":"2026-03-09T14:00:57.137602+0000","last_peered":"2026-03-09T14:00:57.137602+0000","last_clean":"2026-03-09T14:00:57.137602+0000","last_became_active":"2026-03-09T14:00:50.133924+0000","last_became_peered":"2026-03-09T14:00:50.133924+0000","last_unstale":"2026-03-09T14:00:57.137602+0000","last_undegraded":"2026-03-09T14:00:57.137602+0000","last_fullsized":"2026-03-09T14:00:57.137602+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49
.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:46:25.566149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187643+0000","last_change":"2026-03-09T14:00:54.141114+0000","last_active":"2026-03-09T14:01:01.187643+0000","last_peered":"2026-03-09T14:01:01.187643+0000","last_clean":"2026-03-09T14:01:01.187643+0000","last_became_active":"2026-03-09T14:00:54.140736+0000","last_became_peered":"2026-03-09T14:00:54.140736+0000"
,"last_unstale":"2026-03-09T14:01:01.187643+0000","last_undegraded":"2026-03-09T14:01:01.187643+0000","last_fullsized":"2026-03-09T14:01:01.187643+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:34:35.726416+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3"
,"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159258+0000","last_change":"2026-03-09T14:00:56.636175+0000","last_active":"2026-03-09T14:00:57.159258+0000","last_peered":"2026-03-09T14:00:57.159258+0000","last_clean":"2026-03-09T14:00:57.159258+0000","last_became_active":"2026-03-09T14:00:56.636075+0000","last_became_peered":"2026-03-09T14:00:56.636075+0000","last_unstale":"2026-03-09T14:00:57.159258+0000","last_undegraded":"2026-03-09T14:00:57.159258+0000","last_fullsized":"2026-03-09T14:00:57.159258+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:52:58.410103+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.052022+0000","last_change":"2026-03-09T14:00:52.166468+0000","last_active":"2026-03-09T14:00:58.052022+0000","last_peered":"2026-03-09T14:00:58.052022+0000","last_clean":"2026-03-09T14:00:58.052022+0000","last_became_active":"2026-03-09T14:00:52.166126+0000","last_became_peered":"2026-03-09T14:00:52.166126+0000","last_unstale":"2026-03-09T14:00:58.052022+0000","last_undegraded":"2026-03-09T14:00:58.052022+0000","last_fullsized":"2026-03-09T14:00:58.052022+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:09:52.546608+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021665+0000","last_change":"2026-03-09T14:00:50.123135+0000","last_active":"2026-03-09T14:01:01.021665+0000","last_peered":"2026-03-09T14:01:01.021665+0000","last_clean":"2026-03-09T14:01:01.021665+0000","last_became_active":"2026-03-09T14:00:50.123018+0000","last_became_peered":"2026-03-09T14:00:50.123018+0000
","last_unstale":"2026-03-09T14:01:01.021665+0000","last_undegraded":"2026-03-09T14:01:01.021665+0000","last_fullsized":"2026-03-09T14:01:01.021665+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:06:04.376656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3
","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.608188+0000","last_change":"2026-03-09T14:00:54.147021+0000","last_active":"2026-03-09T14:00:57.608188+0000","last_peered":"2026-03-09T14:00:57.608188+0000","last_clean":"2026-03-09T14:00:57.608188+0000","last_became_active":"2026-03-09T14:00:54.146860+0000","last_became_peered":"2026-03-09T14:00:54.146860+0000","last_unstale":"2026-03-09T14:00:57.608188+0000","last_undegraded":"2026-03-09T14:00:57.608188+0000","last_fullsized":"2026-03-09T14:00:57.608188+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:55:04.417905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.138147+0000","last_change":"2026-03-09T14:00:56.146036+0000","last_active":"2026-03-09T14:00:57.138147+0000","last_peered":"2026-03-09T14:00:57.138147+0000","last_clean":"2026-03-09T14:00:57.138147+0000","last_became_active":"2026-03-09T14:00:56.145880+0000","last_became_peered":"2026-03-09T14:00:56.145880+0000","last_unstale":"2026-03-09T14:00:57.138147+0000","last_undegraded":"2026-03-09T14:00:57.138147+0000","last_fullsized":"2026-03-09T14:00:57.138147+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:27:12.701649+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":57,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.039740+0000","last_change":"2026-03-09T14:00:52.133325+0000","last_active":"2026-03-09T14:00:58.039740+0000","last_peered":"2026-03-09T14:00:58.039740+0000","last_clean":"2026-03-09T14:00:58.039740+0000","last_became_active":"2026-03-09T14:00:52.133203+0000","last_became_peered":"2026-03-09T14:00:52.133203+0000","las
t_unstale":"2026-03-09T14:00:58.039740+0000","last_undegraded":"2026-03-09T14:00:58.039740+0000","last_fullsized":"2026-03-09T14:00:58.039740+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:13:43.113266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3
.4","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125832+0000","last_change":"2026-03-09T14:00:50.116304+0000","last_active":"2026-03-09T14:00:57.125832+0000","last_peered":"2026-03-09T14:00:57.125832+0000","last_clean":"2026-03-09T14:00:57.125832+0000","last_became_active":"2026-03-09T14:00:50.116114+0000","last_became_peered":"2026-03-09T14:00:50.116114+0000","last_unstale":"2026-03-09T14:00:57.125832+0000","last_undegraded":"2026-03-09T14:00:57.125832+0000","last_fullsized":"2026-03-09T14:00:57.125832+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:19:15.901445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.189086+0000","last_change":"2026-03-09T14:00:54.153898+0000","last_active":"2026-03-09T14:01:01.189086+0000","last_peered":"2026-03-09T14:01:01.189086+0000","last_clean":"2026-03-09T14:01:01.189086+0000","last_became_active":"2026-03-09T14:00:54.153796+0000","last_became_peered":"2026-03-09T14:00:54.153796+0000","last_unstale":"2026-03-09T14:01:01.189086+0000","last_undegraded":"2026-03-09T14:01:01.189086+0000","last_fullsized":"2026-03-09T14:01:01.189086+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:19:54.805292+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125816+0000","last_change":"2026-03-09T14:00:56.634767+0000","last_active":"2026-03-09T14:00:57.125816+0000","last_peered":"2026-03-09T14:00:57.125816+0000","last_clean":"2026-03-09T14:00:57.125816+0000","last_became_active":"2026-03-09T14:00:56.634564+0000","last_became_peered":"2026-03-09T14:00:56.634564+0000","last_
unstale":"2026-03-09T14:00:57.125816+0000","last_undegraded":"2026-03-09T14:00:57.125816+0000","last_fullsized":"2026-03-09T14:00:57.125816+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:47:42.732204+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","versi
on":"54'28","reported_seq":71,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.055650+0000","last_change":"2026-03-09T14:00:52.128982+0000","last_active":"2026-03-09T14:00:58.055650+0000","last_peered":"2026-03-09T14:00:58.055650+0000","last_clean":"2026-03-09T14:00:58.055650+0000","last_became_active":"2026-03-09T14:00:52.128911+0000","last_became_peered":"2026-03-09T14:00:52.128911+0000","last_unstale":"2026-03-09T14:00:58.055650+0000","last_undegraded":"2026-03-09T14:00:58.055650+0000","last_fullsized":"2026-03-09T14:00:58.055650+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:15:34.540498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191109+0000","last_change":"2026-03-09T14:00:50.117793+0000","last_active":"2026-03-09T14:01:01.191109+0000","last_peered":"2026-03-09T14:01:01.191109+0000","last_clean":"2026-03-09T14:01:01.191109+0000","last_became_active":"2026-03-09T14:00:50.117627+0000","last_became_peered":"2026-03-09T14:00:50.117627+0000","last_unstale":"2026-03-09T14:01:01.191109+0000","last_undegraded":"2026-03-09T14:01:01.191109+0000","last_fullsized":"2026-03-09T14:01:01.191109+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00
:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:52:05.685526+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021747+0000","last_change":"2026-03-09T14:00:52.104230+0000","last_active":"2026-03-09T14:01:01.021747+0000","last_peered":"2026-03-09T14:01:01.021747+0000","last_clean":"2026-03-09T14:01:01.021747+0000","last_became_active":"2026-03-09T14:00:50.102127+0000","last_became_peered":"2026-03-09T14:00:50.102127+000
0","last_unstale":"2026-03-09T14:01:01.021747+0000","last_undegraded":"2026-03-09T14:01:01.021747+0000","last_fullsized":"2026-03-09T14:01:01.021747+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:11:24.708206+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00042748999999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_
snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137954+0000","last_change":"2026-03-09T14:00:54.138601+0000","last_active":"2026-03-09T14:00:57.137954+0000","last_peered":"2026-03-09T14:00:57.137954+0000","last_clean":"2026-03-09T14:00:57.137954+0000","last_became_active":"2026-03-09T14:00:54.138486+0000","last_became_peered":"2026-03-09T14:00:54.138486+0000","last_unstale":"2026-03-09T14:00:57.137954+0000","last_undegraded":"2026-03-09T14:00:57.137954+0000","last_fullsized":"2026-03-09T14:00:57.137954+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:29:04.810641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187957+0000","last_change":"2026-03-09T14:00:56.156882+0000","last_active":"2026-03-09T14:01:01.187957+0000","last_peered":"2026-03-09T14:01:01.187957+0000","last_clean":"2026-03-09T14:01:01.187957+0000","last_became_active":"2026-03-09T14:00:56.156683+0000","last_became_peered":"2026-03-09T14:00:56.156683+0000","last_unstale":"2026-03-09T14:01:01.187957+0000","last_undegraded":"2026-03-09T14:01:01.187957+0000","last_fullsized":"2026-03-09T14:01:01.187957+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:03:33.439516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":48,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.956623+0000","last_change":"2026-03-09T14:00:52.114187+0000","last_active":"2026-03-09T14:00:57.956623+0000","last_peered":"2026-03-09T14:00:57.956623+0000","last_clean":"2026-03-09T14:00:57.956623+0000","last_became_active":"2026-03-09T14:00:52.114105+0000","last_became_peered":"2026-03-09T14:00:52.114105+0000","las
t_unstale":"2026-03-09T14:00:57.956623+0000","last_undegraded":"2026-03-09T14:00:57.956623+0000","last_fullsized":"2026-03-09T14:00:57.956623+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:22:31.607401+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3
.0","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125857+0000","last_change":"2026-03-09T14:00:50.116223+0000","last_active":"2026-03-09T14:00:57.125857+0000","last_peered":"2026-03-09T14:00:57.125857+0000","last_clean":"2026-03-09T14:00:57.125857+0000","last_became_active":"2026-03-09T14:00:50.115995+0000","last_became_peered":"2026-03-09T14:00:50.115995+0000","last_unstale":"2026-03-09T14:00:57.125857+0000","last_undegraded":"2026-03-09T14:00:57.125857+0000","last_fullsized":"2026-03-09T14:00:57.125857+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:37:46.722962+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034672+0000","last_change":"2026-03-09T14:00:52.116177+0000","last_active":"2026-03-09T14:01:01.034672+0000","last_peered":"2026-03-09T14:01:01.034672+0000","last_clean":"2026-03-09T14:01:01.034672+0000","last_became_active":"2026-03-09T14:00:50.118462+0000","last_became_peered":"2026-03-09T14:00:50.118462+0000","last_unstale":"2026-03-09T14:01:01.034672+0000","last_undegraded":"2026-03-09T14:01:01.034672+0000","last_fullsized":"2026-03-09T14:01:01.034672+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088
504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:23:53.069926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019892299999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034762+0000","last_change":"2026-03-09T14:00:54.133283+0000","last_active":"2026-03-09T14:01:01.034762+0000","last_peered":"2026-03-09T14:01:01.034762+0000","last_clean":"2026-03-09T14:01:01.034762+0000","last_became_active":"2026-03-09T14:00:54.133091+0000","last_became_peered":"2026-03-09T14:00:
54.133091+0000","last_unstale":"2026-03-09T14:01:01.034762+0000","last_undegraded":"2026-03-09T14:01:01.034762+0000","last_fullsized":"2026-03-09T14:01:01.034762+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:53:27.882227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]
},{"pgid":"6.5","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159415+0000","last_change":"2026-03-09T14:00:56.635856+0000","last_active":"2026-03-09T14:00:57.159415+0000","last_peered":"2026-03-09T14:00:57.159415+0000","last_clean":"2026-03-09T14:00:57.159415+0000","last_became_active":"2026-03-09T14:00:56.635723+0000","last_became_peered":"2026-03-09T14:00:56.635723+0000","last_unstale":"2026-03-09T14:00:57.159415+0000","last_undegraded":"2026-03-09T14:00:57.159415+0000","last_fullsized":"2026-03-09T14:00:57.159415+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:01:16.080642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"54'12","reported_seq":39,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.977720+0000","last_change":"2026-03-09T14:00:52.120372+0000","last_active":"2026-03-09T14:00:57.977720+0000","last_peered":"2026-03-09T14:00:57.977720+0000","last_clean":"2026-03-09T14:00:57.977720+0000","last_became_active":"2026-03-09T14:00:52.120277+0000","last_became_peered":"2026-03-09T14:00:52.120277+0000","last_unstale":"2026-03-09T14:00:57.977720+0000","last_undegraded":"2026-03-09T14:00:57.977720+0000","last_fullsized":"2026-03-09T14:00:57.977720+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:06:01.080244+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137637+0000","last_change":"2026-03-09T14:00:50.117479+0000","last_active":"2026-03-09T14:00:57.137637+0000","last_peered":"2026-03-09T14:00:57.137637+0000","last_clean":"2026-03-09T14:00:57.137637+0000","last_became_active":"2026-03-09T14:00:50.117261+0000","last_became_peered":"2026-03-09T14:00:50.117261+0000
","last_unstale":"2026-03-09T14:00:57.137637+0000","last_undegraded":"2026-03-09T14:00:57.137637+0000","last_fullsized":"2026-03-09T14:00:57.137637+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:36:38.275968+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0
","version":"54'5","reported_seq":41,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:59.953618+0000","last_change":"2026-03-09T14:00:52.110129+0000","last_active":"2026-03-09T14:00:59.953618+0000","last_peered":"2026-03-09T14:00:59.953618+0000","last_clean":"2026-03-09T14:00:59.953618+0000","last_became_active":"2026-03-09T14:00:50.132504+0000","last_became_peered":"2026-03-09T14:00:50.132504+0000","last_unstale":"2026-03-09T14:00:59.953618+0000","last_undegraded":"2026-03-09T14:00:59.953618+0000","last_fullsized":"2026-03-09T14:00:59.953618+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:34:20.330004+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032990800000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021473+0000","last_change":"2026-03-09T14:00:54.129625+0000","last_active":"2026-03-09T14:01:01.021473+0000","last_peered":"2026-03-09T14:01:01.021473+0000","last_clean":"2026-03-09T14:01:01.021473+0000","last_became_active":"2026-03-09T14:00:54.129525+0000","last_became_peered":"2026-03-09T14:00:54.129525+0000","last_unstale":"2026-03-09T14:01:01.021473+0000","last_undegraded":"2026-03-09T14:01:01.021473+0000","last_fullsized":"2026-03-09T14:01:01.021473+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0",
"last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:11:16.026801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125694+0000","last_change":"2026-03-09T14:00:56.149382+0000","last_active":"2026-03-09T14:00:57.125694+0000","last_peered":"2026-03-09T14:00:57.125694+0000","last_clean":"2026-03-09T14:00:57.125694+0000","last_became_active":"2026-03-09T14:00:56.149261+0000","last_became
_peered":"2026-03-09T14:00:56.149261+0000","last_unstale":"2026-03-09T14:00:57.125694+0000","last_undegraded":"2026-03-09T14:00:57.125694+0000","last_fullsized":"2026-03-09T14:00:57.125694+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:04:11.080870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_p
rimary":1,"purged_snaps":[]},{"pgid":"4.5","version":"54'16","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188430+0000","last_change":"2026-03-09T14:00:52.161099+0000","last_active":"2026-03-09T14:01:01.188430+0000","last_peered":"2026-03-09T14:01:01.188430+0000","last_clean":"2026-03-09T14:01:01.188430+0000","last_became_active":"2026-03-09T14:00:52.161016+0000","last_became_peered":"2026-03-09T14:00:52.161016+0000","last_unstale":"2026-03-09T14:01:01.188430+0000","last_undegraded":"2026-03-09T14:01:01.188430+0000","last_fullsized":"2026-03-09T14:01:01.188430+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:21.492144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188214+0000","last_change":"2026-03-09T14:00:50.125364+0000","last_active":"2026-03-09T14:01:01.188214+0000","last_peered":"2026-03-09T14:01:01.188214+0000","last_clean":"2026-03-09T14:01:01.188214+0000","last_became_active":"2026-03-09T14:00:50.124974+0000","last_became_peered":"2026-03-09T14:00:50.124974+0000","last_unstale":"2026-03-09T14:01:01.188214+0000","last_undegraded":"2026-03-09T14:01:01.188214+0000","last_fullsized":"2026-03-09T14:01:01.188214+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:30:36.803487+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":35,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159895+0000","last_change":"2026-03-09T14:00:48.395001+0000","last_active":"2026-03-09T14:00:57.159895+0000","last_peered":"2026-03-09T14:00:57.159895+0000","last_clean":"2026-03-09T14:00:57.159895+0000","last_became_active":"2026-03-09T14:00:48.087092+0000","last_became_peered":"2026-03-09T14:00:48.087092+0000
","last_unstale":"2026-03-09T14:00:57.159895+0000","last_undegraded":"2026-03-09T14:00:57.159895+0000","last_fullsized":"2026-03-09T14:00:57.159895+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T13:59:57.511953+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T13:59:57.511953+0000","last_clean_scrub_stamp":"2026-03-09T13:59:57.511953+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:07:15.605892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"5.4","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159936+0000","last_change":"2026-03-09T14:00:54.142968+0000","last_active":"2026-03-09T14:00:57.159936+0000","last_peered":"2026-03-09T14:00:57.159936+0000","last_clean":"2026-03-09T14:00:57.159936+0000","last_became_active":"2026-03-09T14:00:54.142859+0000","last_became_peered":"2026-03-09T14:00:54.142859+0000","last_unstale":"2026-03-09T14:00:57.159936+0000","last_undegraded":"2026-03-09T14:00:57.159936+0000","last_fullsized":"2026-03-09T14:00:57.159936+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:37:56.103462+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021236+0000","last_change":"2026-03-09T14:00:56.151080+0000","last_active":"2026-03-09T14:01:01.021236+0000","last_peered":"2026-03-09T14:01:01.021236+0000","last_clean":"2026-03-09T14:01:01.021236+0000","last_became_active":"2026-03-09T14:00:56.150858+0000","last_became_peered":"2026-03-09T14:00:56.150858+0000","last_unstale":"2026-03-09T14:01:01.021236+0000","last_undegraded":"2026-03-09T14:01:01.021236+0000","last_fullsized":"2026-03-09T14:01:01.021236+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:56:57.461325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191037+0000","last_change":"2026-03-09T14:00:52.163958+0000","last_active":"2026-03-09T14:01:01.191037+0000","last_peered":"2026-03-09T14:01:01.191037+0000","last_clean":"2026-03-09T14:01:01.191037+0000","last_became_active":"2026-03-09T14:00:52.163868+0000","last_became_peered":"2026-03-09T14:00:52.163868+0000","las
t_unstale":"2026-03-09T14:01:01.191037+0000","last_undegraded":"2026-03-09T14:01:01.191037+0000","last_fullsized":"2026-03-09T14:01:01.191037+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:39:36.393630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.9","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191029+0000","last_change":"2026-03-09T14:00:50.121159+0000","last_active":"2026-03-09T14:01:01.191029+0000","last_peered":"2026-03-09T14:01:01.191029+0000","last_clean":"2026-03-09T14:01:01.191029+0000","last_became_active":"2026-03-09T14:00:50.120617+0000","last_became_peered":"2026-03-09T14:00:50.120617+0000","last_unstale":"2026-03-09T14:01:01.191029+0000","last_undegraded":"2026-03-09T14:01:01.191029+0000","last_fullsized":"2026-03-09T14:01:01.191029+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:34:58.667845+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022044+0000","last_change":"2026-03-09T14:00:54.146444+0000","last_active":"2026-03-09T14:01:01.022044+0000","last_peered":"2026-03-09T14:01:01.022044+0000","last_clean":"2026-03-09T14:01:01.022044+0000","last_became_active":"2026-03-09T14:00:54.146336+0000","last_became_peered":"2026-03-09T14:00:54.146336+0000","last_unstale":"2026-03-09T14:01:01.022044+0000","last_undegraded":"2026-03-09T14:01:01.022044+0000","last_fullsized":"2026-03-09T14:01:01.022044+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:21:06.037240+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187564+0000","last_change":"2026-03-09T14:00:56.636903+0000","last_active":"2026-03-09T14:01:01.187564+0000","last_peered":"2026-03-09T14:01:01.187564+0000","last_clean":"2026-03-09T14:01:01.187564+0000","last_became_active":"2026-03-09T14:00:56.634798+0000","last_became_peered":"2026-03-09T14:00:56.634798+0000","last_
unstale":"2026-03-09T14:01:01.187564+0000","last_undegraded":"2026-03-09T14:01:01.187564+0000","last_fullsized":"2026-03-09T14:01:01.187564+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:25:39.147077+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","versi
on":"54'17","reported_seq":51,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190790+0000","last_change":"2026-03-09T14:00:52.164490+0000","last_active":"2026-03-09T14:01:01.190790+0000","last_peered":"2026-03-09T14:01:01.190790+0000","last_clean":"2026-03-09T14:01:01.190790+0000","last_became_active":"2026-03-09T14:00:52.164421+0000","last_became_peered":"2026-03-09T14:00:52.164421+0000","last_unstale":"2026-03-09T14:01:01.190790+0000","last_undegraded":"2026-03-09T14:01:01.190790+0000","last_fullsized":"2026-03-09T14:01:01.190790+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:52:43.959871+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188543+0000","last_change":"2026-03-09T14:00:50.114758+0000","last_active":"2026-03-09T14:01:01.188543+0000","last_peered":"2026-03-09T14:01:01.188543+0000","last_clean":"2026-03-09T14:01:01.188543+0000","last_became_active":"2026-03-09T14:00:50.112071+0000","last_became_peered":"2026-03-09T14:00:50.112071+0000","last_unstale":"2026-03-09T14:01:01.188543+0000","last_undegraded":"2026-03-09T14:01:01.188543+0000","last_fullsized":"2026-03-09T14:01:01.188543+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:53:52.666574+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125684+0000","last_change":"2026-03-09T14:00:54.152303+0000","last_active":"2026-03-09T14:00:57.125684+0000","last_peered":"2026-03-09T14:00:57.125684+0000","last_clean":"2026-03-09T14:00:57.125684+0000","last_became_active":"2026-03-09T14:00:54.152198+0000","last_became_peered":"2026-03-09T14:00:54.152198+0000",
"last_unstale":"2026-03-09T14:00:57.125684+0000","last_undegraded":"2026-03-09T14:00:57.125684+0000","last_fullsized":"2026-03-09T14:00:57.125684+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:10:22.025419+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034810+0000","last_change":"2026-03-09T14:00:56.148068+0000","last_active":"2026-03-09T14:01:01.034810+0000","last_peered":"2026-03-09T14:01:01.034810+0000","last_clean":"2026-03-09T14:01:01.034810+0000","last_became_active":"2026-03-09T14:00:56.147965+0000","last_became_peered":"2026-03-09T14:00:56.147965+0000","last_unstale":"2026-03-09T14:01:01.034810+0000","last_undegraded":"2026-03-09T14:01:01.034810+0000","last_fullsized":"2026-03-09T14:01:01.034810+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:36:04.892964+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191455+0000","last_change":"2026-03-09T14:00:52.147537+0000","last_active":"2026-03-09T14:01:01.191455+0000","last_peered":"2026-03-09T14:01:01.191455+0000","last_clean":"2026-03-09T14:01:01.191455+0000","last_became_active":"2026-03-09T14:00:52.145967+0000","last_became_peered":"2026-03-09T14:00:52.145967+0000","last_unstale":"2026-03-09T14:01:01.191455+0000","last_undegraded":"2026-03-09T14:01:01.191455+0000","last_fullsized":"2026-03-09T14:01:01.191455+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:45:45.425488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188021+0000","last_change":"2026-03-09T14:00:50.135234+0000","last_active":"2026-03-09T14:01:01.188021+0000","last_peered":"2026-03-09T14:01:01.188021+0000","last_clean":"2026-03-09T14:01:01.188021+0000","last_became_active":"2026-03-09T14:00:50.122241+0000","last_became_peered":"2026-03-09T14:00:50.122241+0000
","last_unstale":"2026-03-09T14:01:01.188021+0000","last_undegraded":"2026-03-09T14:01:01.188021+0000","last_fullsized":"2026-03-09T14:01:01.188021+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:17:22.617123+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d
","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034974+0000","last_change":"2026-03-09T14:00:54.133340+0000","last_active":"2026-03-09T14:01:01.034974+0000","last_peered":"2026-03-09T14:01:01.034974+0000","last_clean":"2026-03-09T14:01:01.034974+0000","last_became_active":"2026-03-09T14:00:54.133218+0000","last_became_peered":"2026-03-09T14:00:54.133218+0000","last_unstale":"2026-03-09T14:01:01.034974+0000","last_undegraded":"2026-03-09T14:01:01.034974+0000","last_fullsized":"2026-03-09T14:01:01.034974+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:30:05.926433+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191239+0000","last_change":"2026-03-09T14:00:56.148535+0000","last_active":"2026-03-09T14:01:01.191239+0000","last_peered":"2026-03-09T14:01:01.191239+0000","last_clean":"2026-03-09T14:01:01.191239+0000","last_became_active":"2026-03-09T14:00:56.142713+0000","last_became_peered":"2026-03-09T14:00:56.142713+0000","last_unstale":"2026-03-09T14:01:01.191239+0000","last_undegraded":"2026-03-09T14:01:01.191239+0000","last_fullsized":"2026-03-09T14:01:01.191239+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:21:23.550254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.033820+0000","last_change":"2026-03-09T14:00:52.132547+0000","last_active":"2026-03-09T14:00:58.033820+0000","last_peered":"2026-03-09T14:00:58.033820+0000","last_clean":"2026-03-09T14:00:58.033820+0000","last_became_active":"2026-03-09T14:00:52.132444+0000","last_became_peered":"2026-03-09T14:00:52.132444+0000","last
_unstale":"2026-03-09T14:00:58.033820+0000","last_undegraded":"2026-03-09T14:00:58.033820+0000","last_fullsized":"2026-03-09T14:00:58.033820+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:50:33.124223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c"
,"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021581+0000","last_change":"2026-03-09T14:00:50.120381+0000","last_active":"2026-03-09T14:01:01.021581+0000","last_peered":"2026-03-09T14:01:01.021581+0000","last_clean":"2026-03-09T14:01:01.021581+0000","last_became_active":"2026-03-09T14:00:50.116004+0000","last_became_peered":"2026-03-09T14:00:50.116004+0000","last_unstale":"2026-03-09T14:01:01.021581+0000","last_undegraded":"2026-03-09T14:01:01.021581+0000","last_fullsized":"2026-03-09T14:01:01.021581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:03:16.730400+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034869+0000","last_change":"2026-03-09T14:00:54.137426+0000","last_active":"2026-03-09T14:01:01.034869+0000","last_peered":"2026-03-09T14:01:01.034869+0000","last_clean":"2026-03-09T14:01:01.034869+0000","last_became_active":"2026-03-09T14:00:54.137326+0000","last_became_peered":"2026-03-09T14:00:54.137326+0000","last_unstale":"2026-03-09T14:01:01.034869+0000","last_undegraded":"2026-03-09T14:01:01.034869+0000","last_fullsized":"2026-03-09T14:01:01.034869+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:02:08.599363+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.138017+0000","last_change":"2026-03-09T14:00:56.155493+0000","last_active":"2026-03-09T14:00:57.138017+0000","last_peered":"2026-03-09T14:00:57.138017+0000","last_clean":"2026-03-09T14:00:57.138017+0000","last_became_active":"2026-03-09T14:00:56.155369+0000","last_became_peered":"2026-03-09T14:00:56.155369+0000","last_
unstale":"2026-03-09T14:00:57.138017+0000","last_undegraded":"2026-03-09T14:00:57.138017+0000","last_fullsized":"2026-03-09T14:00:57.138017+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:22:44.809797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","versi
on":"54'19","reported_seq":54,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188620+0000","last_change":"2026-03-09T14:00:52.160927+0000","last_active":"2026-03-09T14:01:01.188620+0000","last_peered":"2026-03-09T14:01:01.188620+0000","last_clean":"2026-03-09T14:01:01.188620+0000","last_became_active":"2026-03-09T14:00:52.160815+0000","last_became_peered":"2026-03-09T14:00:52.160815+0000","last_unstale":"2026-03-09T14:01:01.188620+0000","last_undegraded":"2026-03-09T14:01:01.188620+0000","last_fullsized":"2026-03-09T14:01:01.188620+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:20:10.337720+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159827+0000","last_change":"2026-03-09T14:00:50.127310+0000","last_active":"2026-03-09T14:00:57.159827+0000","last_peered":"2026-03-09T14:00:57.159827+0000","last_clean":"2026-03-09T14:00:57.159827+0000","last_became_active":"2026-03-09T14:00:50.127218+0000","last_became_peered":"2026-03-09T14:00:50.127218+0000","last_unstale":"2026-03-09T14:00:57.159827+0000","last_undegraded":"2026-03-09T14:00:57.159827+0000","last_fullsized":"2026-03-09T14:00:57.159827+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:32:41.014774+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034929+0000","last_change":"2026-03-09T14:00:54.132366+0000","last_active":"2026-03-09T14:01:01.034929+0000","last_peered":"2026-03-09T14:01:01.034929+0000","last_clean":"2026-03-09T14:01:01.034929+0000","last_became_active":"2026-03-09T14:00:54.132275+0000","last_became_peered":"2026-03-09T14:00:54.132275+0000",
"last_unstale":"2026-03-09T14:01:01.034929+0000","last_undegraded":"2026-03-09T14:01:01.034929+0000","last_fullsized":"2026-03-09T14:01:01.034929+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:35:51.237218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159806+0000","last_change":"2026-03-09T14:00:56.154569+0000","last_active":"2026-03-09T14:00:57.159806+0000","last_peered":"2026-03-09T14:00:57.159806+0000","last_clean":"2026-03-09T14:00:57.159806+0000","last_became_active":"2026-03-09T14:00:56.154230+0000","last_became_peered":"2026-03-09T14:00:56.154230+0000","last_unstale":"2026-03-09T14:00:57.159806+0000","last_undegraded":"2026-03-09T14:00:57.159806+0000","last_fullsized":"2026-03-09T14:00:57.159806+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:15:34.971763+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191240+0000","last_change":"2026-03-09T14:00:52.163664+0000","last_active":"2026-03-09T14:01:01.191240+0000","last_peered":"2026-03-09T14:01:01.191240+0000","last_clean":"2026-03-09T14:01:01.191240+0000","last_became_active":"2026-03-09T14:00:52.162930+0000","last_became_peered":"2026-03-09T14:00:52.162930+0000","last_unstale":"2026-03-09T14:01:01.191240+0000","last_undegraded":"2026-03-09T14:01:01.191240+0000","last_fullsized":"2026-03-09T14:01:01.191240+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:14:58.698926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159698+0000","last_change":"2026-03-09T14:00:50.132760+0000","last_active":"2026-03-09T14:00:57.159698+0000","last_peered":"2026-03-09T14:00:57.159698+0000","last_clean":"2026-03-09T14:00:57.159698+0000","last_became_active":"2026-03-09T14:00:50.132628+0000","last_became_peered":"2026-03-09T14:00:50.132628+00
00","last_unstale":"2026-03-09T14:00:57.159698+0000","last_undegraded":"2026-03-09T14:00:57.159698+0000","last_fullsized":"2026-03-09T14:00:57.159698+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:58:51.476383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5
.8","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034843+0000","last_change":"2026-03-09T14:00:54.132344+0000","last_active":"2026-03-09T14:01:01.034843+0000","last_peered":"2026-03-09T14:01:01.034843+0000","last_clean":"2026-03-09T14:01:01.034843+0000","last_became_active":"2026-03-09T14:00:54.132258+0000","last_became_peered":"2026-03-09T14:00:54.132258+0000","last_unstale":"2026-03-09T14:01:01.034843+0000","last_undegraded":"2026-03-09T14:01:01.034843+0000","last_fullsized":"2026-03-09T14:01:01.034843+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:42:35.661374+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187732+0000","last_change":"2026-03-09T14:00:56.156981+0000","last_active":"2026-03-09T14:01:01.187732+0000","last_peered":"2026-03-09T14:01:01.187732+0000","last_clean":"2026-03-09T14:01:01.187732+0000","last_became_active":"2026-03-09T14:00:56.156600+0000","last_became_peered":"2026-03-09T14:00:56.156600+0000","last_unstale":"2026-03-09T14:01:01.187732+0000","last_undegraded":"2026-03-09T14:01:01.187732+0000","last_fullsized":"2026-03-09T14:01:01.187732+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:31:49.023626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021851+0000","last_change":"2026-03-09T14:00:52.133676+0000","last_active":"2026-03-09T14:01:01.021851+0000","last_peered":"2026-03-09T14:01:01.021851+0000","last_clean":"2026-03-09T14:01:01.021851+0000","last_became_active":"2026-03-09T14:00:52.133525+0000","last_became_peered":"2026-03-09T14:00:52.133525+0000","las
t_unstale":"2026-03-09T14:01:01.021851+0000","last_undegraded":"2026-03-09T14:01:01.021851+0000","last_fullsized":"2026-03-09T14:01:01.021851+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:48:09.694923+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3
.f","version":"47'3","reported_seq":43,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.478514+0000","last_change":"2026-03-09T14:00:50.119383+0000","last_active":"2026-03-09T14:00:57.478514+0000","last_peered":"2026-03-09T14:00:57.478514+0000","last_clean":"2026-03-09T14:00:57.478514+0000","last_became_active":"2026-03-09T14:00:50.119246+0000","last_became_peered":"2026-03-09T14:00:50.119246+0000","last_unstale":"2026-03-09T14:00:57.478514+0000","last_undegraded":"2026-03-09T14:00:57.478514+0000","last_fullsized":"2026-03-09T14:00:57.478514+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":3,"log_dups_size":0,"ondisk_log_size":3,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:54:45.511440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":528,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":17,"num_read_kb":17,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.617072+0000","last_change":"2026-03-09T14:00:54.157391+0000","last_active":"2026-03-09T14:00:57.617072+0000","last_peered":"2026-03-09T14:00:57.617072+0000","last_clean":"2026-03-09T14:00:57.617072+0000","last_became_active":"2026-03-09T14:00:54.157042+0000","last_became_peered":"2026-03-09T14:00:54.157042+0000","last_unstale":"2026-03-09T14:00:57.617072+0000","last_undegraded":"2026-03-09T14:00:57.617072+0000","last_fullsized":"2026-03-09T14:00:57.617072+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53
.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:56:01.518933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021822+0000","last_change":"2026-03-09T14:00:56.637208+0000","last_active":"2026-03-09T14:01:01.021822+0000","last_peered":"2026-03-09T14:01:01.021822+0000","last_clean":"2026-03-09T14:01:01.021822+0000","last_became_active":"2026-03-09T14:00:56.637101+0000","last_became_peered":"2026-03-09T14:00:56.637101+0000","
last_unstale":"2026-03-09T14:01:01.021822+0000","last_undegraded":"2026-03-09T14:01:01.021822+0000","last_fullsized":"2026-03-09T14:01:01.021822+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:09:04.259304+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10",
"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188522+0000","last_change":"2026-03-09T14:00:50.115631+0000","last_active":"2026-03-09T14:01:01.188522+0000","last_peered":"2026-03-09T14:01:01.188522+0000","last_clean":"2026-03-09T14:01:01.188522+0000","last_became_active":"2026-03-09T14:00:50.115411+0000","last_became_peered":"2026-03-09T14:00:50.115411+0000","last_unstale":"2026-03-09T14:01:01.188522+0000","last_undegraded":"2026-03-09T14:01:01.188522+0000","last_fullsized":"2026-03-09T14:01:01.188522+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:35:54.222544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"54'6","reported_seq":32,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188576+0000","last_change":"2026-03-09T14:00:52.160532+0000","last_active":"2026-03-09T14:01:01.188576+0000","last_peered":"2026-03-09T14:01:01.188576+0000","last_clean":"2026-03-09T14:01:01.188576+0000","last_became_active":"2026-03-09T14:00:52.159413+0000","last_became_peered":"2026-03-09T14:00:52.159413+0000","last_unstale":"2026-03-09T14:01:01.188576+0000","last_undegraded":"2026-03-09T14:01:01.188576+0000","last_fullsized":"2026-03-09T14:01:01.188576+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:07:07.603055+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021622+0000","last_change":"2026-03-09T14:00:54.134808+0000","last_active":"2026-03-09T14:01:01.021622+0000","last_peered":"2026-03-09T14:01:01.021622+0000","last_clean":"2026-03-09T14:01:01.021622+0000","last_became_active":"2026-03-09T14:00:54.134551+0000","last_became_peered":"2026-03-09T14:00:54.134551+0000","la
st_unstale":"2026-03-09T14:01:01.021622+0000","last_undegraded":"2026-03-09T14:01:01.021622+0000","last_fullsized":"2026-03-09T14:01:01.021622+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:21:26.040903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","v
ersion":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.160181+0000","last_change":"2026-03-09T14:00:56.635796+0000","last_active":"2026-03-09T14:00:57.160181+0000","last_peered":"2026-03-09T14:00:57.160181+0000","last_clean":"2026-03-09T14:00:57.160181+0000","last_became_active":"2026-03-09T14:00:56.635587+0000","last_became_peered":"2026-03-09T14:00:56.635587+0000","last_unstale":"2026-03-09T14:00:57.160181+0000","last_undegraded":"2026-03-09T14:00:57.160181+0000","last_fullsized":"2026-03-09T14:00:57.160181+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:19.163679+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.749348+0000","last_change":"2026-03-09T14:00:52.127333+0000","last_active":"2026-03-09T14:00:57.749348+0000","last_peered":"2026-03-09T14:00:57.749348+0000","last_clean":"2026-03-09T14:00:57.749348+0000","last_became_active":"2026-03-09T14:00:52.127097+0000","last_became_peered":"2026-03-09T14:00:52.127097+0000","last_unstale":"2026-03-09T14:00:57.749348+0000","last_undegraded":"2026-03-09T14:00:57.749348+0000","last_fullsized":"2026-03-09T14:00:57.749348+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:47:11.041277+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159581+0000","last_change":"2026-03-09T14:00:50.116922+0000","last_active":"2026-03-09T14:00:57.159581+0000","last_peered":"2026-03-09T14:00:57.159581+0000","last_clean":"2026-03-09T14:00:57.159581+0000","last_became_active":"2026-03-09T14:00:50.115220+0000","last_became_peered":"2026-03-09T14:00:50.115220+000
0","last_unstale":"2026-03-09T14:00:57.159581+0000","last_undegraded":"2026-03-09T14:00:57.159581+0000","last_fullsized":"2026-03-09T14:00:57.159581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:10:25.689629+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.
17","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188677+0000","last_change":"2026-03-09T14:00:54.146358+0000","last_active":"2026-03-09T14:01:01.188677+0000","last_peered":"2026-03-09T14:01:01.188677+0000","last_clean":"2026-03-09T14:01:01.188677+0000","last_became_active":"2026-03-09T14:00:54.144879+0000","last_became_peered":"2026-03-09T14:00:54.144879+0000","last_unstale":"2026-03-09T14:01:01.188677+0000","last_undegraded":"2026-03-09T14:01:01.188677+0000","last_fullsized":"2026-03-09T14:01:01.188677+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:53:17.258641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.035068+0000","last_change":"2026-03-09T14:00:56.153987+0000","last_active":"2026-03-09T14:01:01.035068+0000","last_peered":"2026-03-09T14:01:01.035068+0000","last_clean":"2026-03-09T14:01:01.035068+0000","last_became_active":"2026-03-09T14:00:56.153881+0000","last_became_peered":"2026-03-09T14:00:56.153881+0000","last_unstale":"2026-03-09T14:01:01.035068+0000","last_undegraded":"2026-03-09T14:01:01.035068+0000","last_fullsized":"2026-03-09T14:01:01.035068+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:58:43.420456+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021896+0000","last_change":"2026-03-09T14:00:52.132726+0000","last_active":"2026-03-09T14:01:01.021896+0000","last_peered":"2026-03-09T14:01:01.021896+0000","last_clean":"2026-03-09T14:01:01.021896+0000","last_became_active":"2026-03-09T14:00:52.132586+0000","last_became_peered":"2026-03-09T14:00:52.132586+0000","la
st_unstale":"2026-03-09T14:01:01.021896+0000","last_undegraded":"2026-03-09T14:01:01.021896+0000","last_fullsized":"2026-03-09T14:01:01.021896+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:40:50.662955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.
12","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137588+0000","last_change":"2026-03-09T14:00:50.117345+0000","last_active":"2026-03-09T14:00:57.137588+0000","last_peered":"2026-03-09T14:00:57.137588+0000","last_clean":"2026-03-09T14:00:57.137588+0000","last_became_active":"2026-03-09T14:00:50.116949+0000","last_became_peered":"2026-03-09T14:00:50.116949+0000","last_unstale":"2026-03-09T14:00:57.137588+0000","last_undegraded":"2026-03-09T14:00:57.137588+0000","last_fullsized":"2026-03-09T14:00:57.137588+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:58:28.202939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188356+0000","last_change":"2026-03-09T14:00:54.151538+0000","last_active":"2026-03-09T14:01:01.188356+0000","last_peered":"2026-03-09T14:01:01.188356+0000","last_clean":"2026-03-09T14:01:01.188356+0000","last_became_active":"2026-03-09T14:00:54.151387+0000","last_became_peered":"2026-03-09T14:00:54.151387+0000","last_unstale":"2026-03-09T14:01:01.188356+0000","last_undegraded":"2026-03-09T14:01:01.188356+0000","last_fullsized":"2026-03-09T14:01:01.188356+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.10
1535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:29:45.507392+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190399+0000","last_change":"2026-03-09T14:00:56.149911+0000","last_active":"2026-03-09T14:01:01.190399+0000","last_peered":"2026-03-09T14:01:01.190399+0000","last_clean":"2026-03-09T14:01:01.190399+0000","last_became_active":"2026-03-09T14:00:56.148413+0000","last_became_peered":"2026-03-09T14:00:56.148413+0000","la
st_unstale":"2026-03-09T14:01:01.190399+0000","last_undegraded":"2026-03-09T14:01:01.190399+0000","last_fullsized":"2026-03-09T14:01:01.190399+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:45:40.538100+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","v
ersion":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188537+0000","last_change":"2026-03-09T14:00:52.160594+0000","last_active":"2026-03-09T14:01:01.188537+0000","last_peered":"2026-03-09T14:01:01.188537+0000","last_clean":"2026-03-09T14:01:01.188537+0000","last_became_active":"2026-03-09T14:00:52.159173+0000","last_became_peered":"2026-03-09T14:00:52.159173+0000","last_unstale":"2026-03-09T14:01:01.188537+0000","last_undegraded":"2026-03-09T14:01:01.188537+0000","last_fullsized":"2026-03-09T14:01:01.188537+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:14:28.480143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159515+0000","last_change":"2026-03-09T14:00:50.120833+0000","last_active":"2026-03-09T14:00:57.159515+0000","last_peered":"2026-03-09T14:00:57.159515+0000","last_clean":"2026-03-09T14:00:57.159515+0000","last_became_active":"2026-03-09T14:00:50.120714+0000","last_became_peered":"2026-03-09T14:00:50.120714+0000","last_unstale":"2026-03-09T14:00:57.159515+0000","last_undegraded":"2026-03-09T14:00:57.159515+0000","last_fullsized":"2026-03-09T14:00:57.159515+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49
.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:18:34.913502+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022070+0000","last_change":"2026-03-09T14:00:54.135125+0000","last_active":"2026-03-09T14:01:01.022070+0000","last_peered":"2026-03-09T14:01:01.022070+0000","last_clean":"2026-03-09T14:01:01.022070+0000","last_became_active":"2026-03-09T14:00:54.135005+0000","last_became_peered":"2026-03-09T14:00:54.135005+0000"
,"last_unstale":"2026-03-09T14:01:01.022070+0000","last_undegraded":"2026-03-09T14:01:01.022070+0000","last_fullsized":"2026-03-09T14:01:01.022070+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:06:42.911471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16
","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137499+0000","last_change":"2026-03-09T14:00:56.154968+0000","last_active":"2026-03-09T14:00:57.137499+0000","last_peered":"2026-03-09T14:00:57.137499+0000","last_clean":"2026-03-09T14:00:57.137499+0000","last_became_active":"2026-03-09T14:00:56.154874+0000","last_became_peered":"2026-03-09T14:00:56.154874+0000","last_unstale":"2026-03-09T14:00:57.137499+0000","last_undegraded":"2026-03-09T14:00:57.137499+0000","last_fullsized":"2026-03-09T14:00:57.137499+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:40:04.605112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190991+0000","last_change":"2026-03-09T14:00:52.164241+0000","last_active":"2026-03-09T14:01:01.190991+0000","last_peered":"2026-03-09T14:01:01.190991+0000","last_clean":"2026-03-09T14:01:01.190991+0000","last_became_active":"2026-03-09T14:00:52.164150+0000","last_became_peered":"2026-03-09T14:00:52.164150+0000","last_unstale":"2026-03-09T14:01:01.190991+0000","last_undegraded":"2026-03-09T14:01:01.190991+0000","last_fullsized":"2026-03-09T14:01:01.190991+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:41:50.296756+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190967+0000","last_change":"2026-03-09T14:00:50.113905+0000","last_active":"2026-03-09T14:01:01.190967+0000","last_peered":"2026-03-09T14:01:01.190967+0000","last_clean":"2026-03-09T14:01:01.190967+0000","last_became_active":"2026-03-09T14:00:50.113428+0000","last_became_peered":"2026-03-09T14:00:50.113428+
0000","last_unstale":"2026-03-09T14:01:01.190967+0000","last_undegraded":"2026-03-09T14:01:01.190967+0000","last_fullsized":"2026-03-09T14:01:01.190967+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:26:12.423394+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"5.12","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125242+0000","last_change":"2026-03-09T14:00:54.131567+0000","last_active":"2026-03-09T14:00:57.125242+0000","last_peered":"2026-03-09T14:00:57.125242+0000","last_clean":"2026-03-09T14:00:57.125242+0000","last_became_active":"2026-03-09T14:00:54.130774+0000","last_became_peered":"2026-03-09T14:00:54.130774+0000","last_unstale":"2026-03-09T14:00:57.125242+0000","last_undegraded":"2026-03-09T14:00:57.125242+0000","last_fullsized":"2026-03-09T14:00:57.125242+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T14:40:11.888601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187596+0000","last_change":"2026-03-09T14:00:56.145531+0000","last_active":"2026-03-09T14:01:01.187596+0000","last_peered":"2026-03-09T14:01:01.187596+0000","last_clean":"2026-03-09T14:01:01.187596+0000","last_became_active":"2026-03-09T14:00:56.145416+0000","last_became_peered":"2026-03-09T14:00:56.145416+0000","last_unstale":"2026-03-09T14:01:01.187596+0000","last_undegraded":"2026-03-09T14:01:01.187596+0000","last_fullsized":"2026-03-09T14:01:01.187596+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:56:44.578058+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.920002+0000","last_change":"2026-03-09T14:00:52.123624+0000","last_active":"2026-03-09T14:00:57.920002+0000","last_peered":"2026-03-09T14:00:57.920002+0000","last_clean":"2026-03-09T14:00:57.920002+0000","last_became_active":"2026-03-09T14:00:52.121963+0000","last_became_peered":"2026-03-09T14:00:52.121963+0000","la
st_unstale":"2026-03-09T14:00:57.920002+0000","last_undegraded":"2026-03-09T14:00:57.920002+0000","last_fullsized":"2026-03-09T14:00:57.920002+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:14:31.493347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.
15","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159475+0000","last_change":"2026-03-09T14:00:50.118082+0000","last_active":"2026-03-09T14:00:57.159475+0000","last_peered":"2026-03-09T14:00:57.159475+0000","last_clean":"2026-03-09T14:00:57.159475+0000","last_became_active":"2026-03-09T14:00:50.117980+0000","last_became_peered":"2026-03-09T14:00:50.117980+0000","last_unstale":"2026-03-09T14:00:57.159475+0000","last_undegraded":"2026-03-09T14:00:57.159475+0000","last_fullsized":"2026-03-09T14:00:57.159475+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:47:57.554137+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188236+0000","last_change":"2026-03-09T14:00:54.136331+0000","last_active":"2026-03-09T14:01:01.188236+0000","last_peered":"2026-03-09T14:01:01.188236+0000","last_clean":"2026-03-09T14:01:01.188236+0000","last_became_active":"2026-03-09T14:00:54.136175+0000","last_became_peered":"2026-03-09T14:00:54.136175+0000","last_unstale":"2026-03-09T14:01:01.188236+0000","last_undegraded":"2026-03-09T14:01:01.188236+0000","last_fullsized":"2026-03-09T14:01:01.188236+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101
535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:08:42.121733+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137388+0000","last_change":"2026-03-09T14:00:56.143759+0000","last_active":"2026-03-09T14:00:57.137388+0000","last_peered":"2026-03-09T14:00:57.137388+0000","last_clean":"2026-03-09T14:00:57.137388+0000","last_became_active":"2026-03-09T14:00:56.143664+0000","last_became_peered":"2026-03-09T14:00:56.143664+0000","las
t_unstale":"2026-03-09T14:00:57.137388+0000","last_undegraded":"2026-03-09T14:00:57.137388+0000","last_fullsized":"2026-03-09T14:00:57.137388+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:46:55.622334+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","ve
rsion":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188496+0000","last_change":"2026-03-09T14:00:52.161210+0000","last_active":"2026-03-09T14:01:01.188496+0000","last_peered":"2026-03-09T14:01:01.188496+0000","last_clean":"2026-03-09T14:01:01.188496+0000","last_became_active":"2026-03-09T14:00:52.159259+0000","last_became_peered":"2026-03-09T14:00:52.159259+0000","last_unstale":"2026-03-09T14:01:01.188496+0000","last_undegraded":"2026-03-09T14:01:01.188496+0000","last_fullsized":"2026-03-09T14:01:01.188496+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:08:13.829084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021508+0000","last_change":"2026-03-09T14:00:50.120914+0000","last_active":"2026-03-09T14:01:01.021508+0000","last_peered":"2026-03-09T14:01:01.021508+0000","last_clean":"2026-03-09T14:01:01.021508+0000","last_became_active":"2026-03-09T14:00:50.116418+0000","last_became_peered":"2026-03-09T14:00:50.116418+0000","last_unstale":"2026-03-09T14:01:01.021508+0000","last_undegraded":"2026-03-09T14:01:01.021508+0000","last_fullsized":"2026-03-09T14:01:01.021508+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:15:21.013213+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159745+0000","last_change":"2026-03-09T14:00:54.157275+0000","last_active":"2026-03-09T14:00:57.159745+0000","last_peered":"2026-03-09T14:00:57.159745+0000","last_clean":"2026-03-09T14:00:57.159745+0000","last_became_active":"2026-03-09T14:00:54.156730+0000","last_became_peered":"2026-03-09T14:00:54.156730+0000
","last_unstale":"2026-03-09T14:00:57.159745+0000","last_undegraded":"2026-03-09T14:00:57.159745+0000","last_fullsized":"2026-03-09T14:00:57.159745+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:43:37.617001+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188472+0000","last_change":"2026-03-09T14:00:56.637443+0000","last_active":"2026-03-09T14:01:01.188472+0000","last_peered":"2026-03-09T14:01:01.188472+0000","last_clean":"2026-03-09T14:01:01.188472+0000","last_became_active":"2026-03-09T14:00:56.635343+0000","last_became_peered":"2026-03-09T14:00:56.635343+0000","last_unstale":"2026-03-09T14:01:01.188472+0000","last_undegraded":"2026-03-09T14:01:01.188472+0000","last_fullsized":"2026-03-09T14:01:01.188472+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:29:49.801540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"54'4","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188313+0000","last_change":"2026-03-09T14:00:52.142301+0000","last_active":"2026-03-09T14:01:01.188313+0000","last_peered":"2026-03-09T14:01:01.188313+0000","last_clean":"2026-03-09T14:01:01.188313+0000","last_became_active":"2026-03-09T14:00:52.142117+0000","last_became_peered":"2026-03-09T14:00:52.142117+0000","last_unstale":"2026-03-09T14:01:01.188313+0000","last_undegraded":"2026-03-09T14:01:01.188313+0000","last_fullsized":"2026-03-09T14:01:01.188313+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:52:04.977371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.366979+0000","last_change":"2026-03-09T14:00:50.124617+0000","last_active":"2026-03-09T14:00:57.366979+0000","last_peered":"2026-03-09T14:00:57.366979+0000","last_clean":"2026-03-09T14:00:57.366979+0000","last_became_active":"2026-03-09T14:00:50.124352+0000","last_became_peered":"2026-03-09T14:00:50.124352+0000","l
ast_unstale":"2026-03-09T14:00:57.366979+0000","last_undegraded":"2026-03-09T14:00:57.366979+0000","last_fullsized":"2026-03-09T14:00:57.366979+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:24:32.349341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11"
,"version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188850+0000","last_change":"2026-03-09T14:00:54.152193+0000","last_active":"2026-03-09T14:01:01.188850+0000","last_peered":"2026-03-09T14:01:01.188850+0000","last_clean":"2026-03-09T14:01:01.188850+0000","last_became_active":"2026-03-09T14:00:54.151950+0000","last_became_peered":"2026-03-09T14:00:54.151950+0000","last_unstale":"2026-03-09T14:01:01.188850+0000","last_undegraded":"2026-03-09T14:01:01.188850+0000","last_fullsized":"2026-03-09T14:01:01.188850+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:37:25.592969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159313+0000","last_change":"2026-03-09T14:00:56.160331+0000","last_active":"2026-03-09T14:00:57.159313+0000","last_peered":"2026-03-09T14:00:57.159313+0000","last_clean":"2026-03-09T14:00:57.159313+0000","last_became_active":"2026-03-09T14:00:56.160148+0000","last_became_peered":"2026-03-09T14:00:56.160148+0000","last_unstale":"2026-03-09T14:00:57.159313+0000","last_undegraded":"2026-03-09T14:00:57.159313+0000","last_fullsized":"2026-03-09T14:00:57.159313+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.11
1764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:24:23.174522+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.126135+0000","last_change":"2026-03-09T14:00:56.130959+0000","last_active":"2026-03-09T14:00:57.126135+0000","last_peered":"2026-03-09T14:00:57.126135+0000","last_clean":"2026-03-09T14:00:57.126135+0000","last_became_active":"2026-03-09T14:00:56.130810+0000","last_became_peered":"2026-03-09T14:00:56.130810+0000","l
ast_unstale":"2026-03-09T14:00:57.126135+0000","last_undegraded":"2026-03-09T14:00:57.126135+0000","last_fullsized":"2026-03-09T14:00:57.126135+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:11:56.350855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","
version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188177+0000","last_change":"2026-03-09T14:00:50.135782+0000","last_active":"2026-03-09T14:01:01.188177+0000","last_peered":"2026-03-09T14:01:01.188177+0000","last_clean":"2026-03-09T14:01:01.188177+0000","last_became_active":"2026-03-09T14:00:50.135504+0000","last_became_peered":"2026-03-09T14:00:50.135504+0000","last_unstale":"2026-03-09T14:01:01.188177+0000","last_undegraded":"2026-03-09T14:01:01.188177+0000","last_fullsized":"2026-03-09T14:01:01.188177+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:25.156930+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188668+0000","last_change":"2026-03-09T14:00:52.146674+0000","last_active":"2026-03-09T14:01:01.188668+0000","last_peered":"2026-03-09T14:01:01.188668+0000","last_clean":"2026-03-09T14:01:01.188668+0000","last_became_active":"2026-03-09T14:00:52.146552+0000","last_became_peered":"2026-03-09T14:00:52.146552+0000","last_unstale":"2026-03-09T14:01:01.188668+0000","last_undegraded":"2026-03-09T14:01:01.188668+0000","last_fullsized":"2026-03-09T14:01:01.188668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:42:55.294332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137865+0000","last_change":"2026-03-09T14:00:54.136626+0000","last_active":"2026-03-09T14:00:57.137865+0000","last_peered":"2026-03-09T14:00:57.137865+0000","last_clean":"2026-03-09T14:00:57.137865+0000","last_became_active":"2026-03-09T14:00:54.135058+0000","last_became_peered":"2026-03-09T14:00:54.135058+
0000","last_unstale":"2026-03-09T14:00:57.137865+0000","last_undegraded":"2026-03-09T14:00:57.137865+0000","last_fullsized":"2026-03-09T14:00:57.137865+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:25:37.631482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_s
tats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_
snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num
_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size
":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593733,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27828,"kb_used_data":996,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939596,"statfs":{"total":21470642176,"available":21442146304,"internally_reserved":0,"allocated":1019904,"data_stored":666574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns"
:0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27800,"kb_used_data":968,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939624,"statfs":{"total":21470642176,"available":21442174976,"internally_reserved":0,"allocated":991232,"data_stored":665040,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":13,"apply_latency_ms":13,"commit_latency_ns":13000000,"apply_latency_ns":13000000},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920777,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27360,"kb_used_data":524,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940064,"statfs":{"total":21470642176,"available":21442625536,"internally_reserved":0,"allocated":536576,"data_stored":207112,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":28,"seq":120259084299,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27404,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940020,"statfs":{"total":21470642176,"available":21442580480,"internally_reserved":0,"allocated":577536,"data_stored":212964,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_
peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":29,"apply_latency_ms":29,"commit_latency_ns":29000000,"apply_latency_ns":29000000},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27404,"kb_used_data":568,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940020,"statfs":{"total":21470642176,"available":21442580480,"internally_reserved":0,"allocated":581632,"data_stored":213794,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":14,"apply_latency_ms":14,"commit_latency_ns":14000000,"apply_latency_ns":14000000},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476751,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27364,"kb_used_data":528,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940060,"statfs":{"total":21470642176,"available":21442621440,"internally_reserved":0,"allocated":540672,"data_stored":212071,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":20,"apply_latency_ms":20,"commit_latency_ns":20000000,"apply_latency_ns":20000000},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607569,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27416,"kb_used_data":580,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940008,"st
atfs":{"total":21470642176,"available":21442568192,"internally_reserved":0,"allocated":593920,"data_stored":207545,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738387,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27832,"kb_used_data":1000,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939592,"statfs":{"total":21470642176,"available":21442142208,"internally_reserved":0,"allocated":1024000,"data_stored":667263,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserve
d":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"int
ernally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":620,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available
":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid"
:5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"availab
le":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:01:04.729 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph pg dump --format=json 2026-03-09T14:01:04.899 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:04.991 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:04 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:04.664Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T14:01:05.120 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:01:05.125 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T14:01:05.179 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":109,"stamp":"2026-03-09T14:01:03.261767+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":776,"num_read_kb":519,"num_write":493,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":505,"ondisk_log_size":505,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":220408,"kb_used_data":5728,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518984,"statfs":{"total":171765137408,"available
":171539439616,"internally_reserved":0,"allocated":5865472,"data_stored":3052363,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12711,"internal_metadata":219663961},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":76,"apply_latency_ms":76,"commit_latency_ns":76000000,"apply_latency_ns":76000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":4490,"num_objects":186,"num_object_clones":0,"num_object_copies":558,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":186,"num_whiteouts":0,"num_read":713,"num_read_kb":465,"num_write":423,"num_write_kb":37,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"7.142344"},"pg_stats":[{"pgid":"3.1f","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137616+0000","last_change":"2026-03-09T14:00:50.124706+0000","last_active":"2026-03-09T14:00:57.137616+0000","last_peered":"20
26-03-09T14:00:57.137616+0000","last_clean":"2026-03-09T14:00:57.137616+0000","last_became_active":"2026-03-09T14:00:50.124479+0000","last_became_peered":"2026-03-09T14:00:50.124479+0000","last_unstale":"2026-03-09T14:00:57.137616+0000","last_undegraded":"2026-03-09T14:00:57.137616+0000","last_fullsized":"2026-03-09T14:00:57.137616+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:47:57.185062+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.18","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191375+0000","last_change":"2026-03-09T14:00:52.145594+0000","last_active":"2026-03-09T14:01:01.191375+0000","last_peered":"2026-03-09T14:01:01.191375+0000","last_clean":"2026-03-09T14:01:01.191375+0000","last_became_active":"2026-03-09T14:00:52.145336+0000","last_became_peered":"2026-03-09T14:00:52.145336+0000","last_unstale":"2026-03-09T14:01:01.191375+0000","last_undegraded":"2026-03-09T14:01:01.191375+0000","last_fullsized":"2026-03-09T14:01:01.191375+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:13:18.777016+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125346+0000","last_change":"2026-03-09T14:00:54.135389+0000","last_active":"2026-03-09T14:00:57.125346+0000","last_peered":"2026-03-09T14:00:57.125346+0000","last_clean":"2026-03-09T14:00:57.125346+0000","last_became_active":"2026-03-09T14:00:54.135294+0000","last_became_peered":"2026-03-09T14:00:54.135294+000
0","last_unstale":"2026-03-09T14:00:57.125346+0000","last_undegraded":"2026-03-09T14:00:57.125346+0000","last_fullsized":"2026-03-09T14:00:57.125346+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:15:00.725084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.
1a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191375+0000","last_change":"2026-03-09T14:00:56.141611+0000","last_active":"2026-03-09T14:01:01.191375+0000","last_peered":"2026-03-09T14:01:01.191375+0000","last_clean":"2026-03-09T14:01:01.191375+0000","last_became_active":"2026-03-09T14:00:56.141165+0000","last_became_peered":"2026-03-09T14:00:56.141165+0000","last_unstale":"2026-03-09T14:01:01.191375+0000","last_undegraded":"2026-03-09T14:01:01.191375+0000","last_fullsized":"2026-03-09T14:01:01.191375+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:14:30.173549+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.1b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187885+0000","last_change":"2026-03-09T14:00:56.637703+0000","last_active":"2026-03-09T14:01:01.187885+0000","last_peered":"2026-03-09T14:01:01.187885+0000","last_clean":"2026-03-09T14:01:01.187885+0000","last_became_active":"2026-03-09T14:00:56.636353+0000","last_became_peered":"2026-03-09T14:00:56.636353+0000","last_unstale":"2026-03-09T14:01:01.187885+0000","last_undegraded":"2026-03-09T14:01:01.187885+0000","last_fullsized":"2026-03-09T14:01:01.187885+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:34:05.734008+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1e","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187908+0000","last_change":"2026-03-09T14:00:50.135838+0000","last_active":"2026-03-09T14:01:01.187908+0000","last_peered":"2026-03-09T14:01:01.187908+0000","last_clean":"2026-03-09T14:01:01.187908+0000","last_became_active":"2026-03-09T14:00:50.126256+0000","last_became_peered":"2026-03-09T14:00:50.126256+0000","las
t_unstale":"2026-03-09T14:01:01.187908+0000","last_undegraded":"2026-03-09T14:01:01.187908+0000","last_fullsized":"2026-03-09T14:01:01.187908+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:05:43.294214+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.19","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187932+0000","last_change":"2026-03-09T14:00:52.125828+0000","last_active":"2026-03-09T14:01:01.187932+0000","last_peered":"2026-03-09T14:01:01.187932+0000","last_clean":"2026-03-09T14:01:01.187932+0000","last_became_active":"2026-03-09T14:00:52.125688+0000","last_became_peered":"2026-03-09T14:00:52.125688+0000","last_unstale":"2026-03-09T14:01:01.187932+0000","last_undegraded":"2026-03-09T14:01:01.187932+0000","last_fullsized":"2026-03-09T14:01:01.187932+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:03:39.962511+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2,0],"acting":[3,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190465+0000","last_change":"2026-03-09T14:00:54.159732+0000","last_active":"2026-03-09T14:01:01.190465+0000","last_peered":"2026-03-09T14:01:01.190465+0000","last_clean":"2026-03-09T14:01:01.190465+0000","last_became_active":"2026-03-09T14:00:54.159573+0000","last_became_peered":"2026-03-09T14:00:54.159573+0000","last_unstale":"2026-03-09T14:01:01.190465+0000","last_undegraded":"2026-03-09T14:01:01.190465+0000","last_fullsized":"2026-03-09T14:01:01.190465+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:48:38.542882+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1d","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021169+0000","last_change":"2026-03-09T14:00:50.115103+0000","last_active":"2026-03-09T14:01:01.021169+0000","last_peered":"2026-03-09T14:01:01.021169+0000","last_clean":"2026-03-09T14:01:01.021169+0000","last_became_active":"2026-03-09T14:00:50.112433+0000","last_became_peered":"2026-03-09T14:00:50.112433+0000
","last_unstale":"2026-03-09T14:01:01.021169+0000","last_undegraded":"2026-03-09T14:01:01.021169+0000","last_fullsized":"2026-03-09T14:01:01.021169+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:31:15.423838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1
a","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190610+0000","last_change":"2026-03-09T14:00:52.144870+0000","last_active":"2026-03-09T14:01:01.190610+0000","last_peered":"2026-03-09T14:01:01.190610+0000","last_clean":"2026-03-09T14:01:01.190610+0000","last_became_active":"2026-03-09T14:00:52.144703+0000","last_became_peered":"2026-03-09T14:00:52.144703+0000","last_unstale":"2026-03-09T14:01:01.190610+0000","last_undegraded":"2026-03-09T14:01:01.190610+0000","last_fullsized":"2026-03-09T14:01:01.190610+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:26:39.769449+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,0],"acting":[4,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021207+0000","last_change":"2026-03-09T14:00:54.135182+0000","last_active":"2026-03-09T14:01:01.021207+0000","last_peered":"2026-03-09T14:01:01.021207+0000","last_clean":"2026-03-09T14:01:01.021207+0000","last_became_active":"2026-03-09T14:00:54.135052+0000","last_became_peered":"2026-03-09T14:00:54.135052+0000","last_unstale":"2026-03-09T14:01:01.021207+0000","last_undegraded":"2026-03-09T14:01:01.021207+0000","last_fullsized":"2026-03-09T14:01:01.021207+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:34:53.793325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137836+0000","last_change":"2026-03-09T14:00:56.152695+0000","last_active":"2026-03-09T14:00:57.137836+0000","last_peered":"2026-03-09T14:00:57.137836+0000","last_clean":"2026-03-09T14:00:57.137836+0000","last_became_active":"2026-03-09T14:00:56.152570+0000","last_became_peered":"2026-03-09T14:00:56.152570+0000
","last_unstale":"2026-03-09T14:00:57.137836+0000","last_undegraded":"2026-03-09T14:00:57.137836+0000","last_fullsized":"2026-03-09T14:00:57.137836+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:20:44.317207+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1
c","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021432+0000","last_change":"2026-03-09T14:00:50.115695+0000","last_active":"2026-03-09T14:01:01.021432+0000","last_peered":"2026-03-09T14:01:01.021432+0000","last_clean":"2026-03-09T14:01:01.021432+0000","last_became_active":"2026-03-09T14:00:50.112569+0000","last_became_peered":"2026-03-09T14:00:50.112569+0000","last_unstale":"2026-03-09T14:01:01.021432+0000","last_undegraded":"2026-03-09T14:01:01.021432+0000","last_fullsized":"2026-03-09T14:01:01.021432+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:26:53.019912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.1b","version":"54'5","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190682+0000","last_change":"2026-03-09T14:00:52.165696+0000","last_active":"2026-03-09T14:01:01.190682+0000","last_peered":"2026-03-09T14:01:01.190682+0000","last_clean":"2026-03-09T14:01:01.190682+0000","last_became_active":"2026-03-09T14:00:52.165615+0000","last_became_peered":"2026-03-09T14:00:52.165615+0000","last_unstale":"2026-03-09T14:01:01.190682+0000","last_undegraded":"2026-03-09T14:01:01.190682+0000","last_fullsized":"2026-03-09T14:01:01.190682+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:46:05.967459+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,1],"acting":[4,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1a","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159452+0000","last_change":"2026-03-09T14:00:54.157187+0000","last_active":"2026-03-09T14:00:57.159452+0000","last_peered":"2026-03-09T14:00:57.159452+0000","last_clean":"2026-03-09T14:00:57.159452+0000","last_became_active":"2026-03-09T14:00:54.156697+0000","last_became_peered":"2026-03-09T14:00:54.156697+0000",
"last_unstale":"2026-03-09T14:00:57.159452+0000","last_undegraded":"2026-03-09T14:00:57.159452+0000","last_fullsized":"2026-03-09T14:00:57.159452+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:57:45.992426+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19"
,"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021402+0000","last_change":"2026-03-09T14:00:56.151137+0000","last_active":"2026-03-09T14:01:01.021402+0000","last_peered":"2026-03-09T14:01:01.021402+0000","last_clean":"2026-03-09T14:01:01.021402+0000","last_became_active":"2026-03-09T14:00:56.150968+0000","last_became_peered":"2026-03-09T14:00:56.150968+0000","last_unstale":"2026-03-09T14:01:01.021402+0000","last_undegraded":"2026-03-09T14:01:01.021402+0000","last_fullsized":"2026-03-09T14:01:01.021402+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:48:04.441308+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.1e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191299+0000","last_change":"2026-03-09T14:00:56.634838+0000","last_active":"2026-03-09T14:01:01.191299+0000","last_peered":"2026-03-09T14:01:01.191299+0000","last_clean":"2026-03-09T14:01:01.191299+0000","last_became_active":"2026-03-09T14:00:56.634720+0000","last_became_peered":"2026-03-09T14:00:56.634720+0000","last_unstale":"2026-03-09T14:01:01.191299+0000","last_undegraded":"2026-03-09T14:01:01.191299+0000","last_fullsized":"2026-03-09T14:01:01.191299+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:17:18.798862+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.1b","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137674+0000","last_change":"2026-03-09T14:00:50.117424+0000","last_active":"2026-03-09T14:00:57.137674+0000","last_peered":"2026-03-09T14:00:57.137674+0000","last_clean":"2026-03-09T14:00:57.137674+0000","last_became_active":"2026-03-09T14:00:50.117125+0000","last_became_peered":"2026-03-09T14:00:50.117125+0000","las
t_unstale":"2026-03-09T14:00:57.137674+0000","last_undegraded":"2026-03-09T14:00:57.137674+0000","last_fullsized":"2026-03-09T14:00:57.137674+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:49:23.702200+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.1c","ve
rsion":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034393+0000","last_change":"2026-03-09T14:00:52.126262+0000","last_active":"2026-03-09T14:01:01.034393+0000","last_peered":"2026-03-09T14:01:01.034393+0000","last_clean":"2026-03-09T14:01:01.034393+0000","last_became_active":"2026-03-09T14:00:52.126157+0000","last_became_peered":"2026-03-09T14:00:52.126157+0000","last_unstale":"2026-03-09T14:01:01.034393+0000","last_undegraded":"2026-03-09T14:01:01.034393+0000","last_fullsized":"2026-03-09T14:01:01.034393+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:54:31.188950+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,3],"acting":[2,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.1d","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125515+0000","last_change":"2026-03-09T14:00:54.152260+0000","last_active":"2026-03-09T14:00:57.125515+0000","last_peered":"2026-03-09T14:00:57.125515+0000","last_clean":"2026-03-09T14:00:57.125515+0000","last_became_active":"2026-03-09T14:00:54.152155+0000","last_became_peered":"2026-03-09T14:00:54.152155+0000","last_unstale":"2026-03-09T14:00:57.125515+0000","last_undegraded":"2026-03-09T14:00:57.125515+0000","last_fullsized":"2026-03-09T14:00:57.125515+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:11:31.573364+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188612+0000","last_change":"2026-03-09T14:00:56.637905+0000","last_active":"2026-03-09T14:01:01.188612+0000","last_peered":"2026-03-09T14:01:01.188612+0000","last_clean":"2026-03-09T14:01:01.188612+0000","last_became_active":"2026-03-09T14:00:56.636642+0000","last_became_peered":"2026-03-09T14:00:56.636642+0000
","last_unstale":"2026-03-09T14:01:01.188612+0000","last_undegraded":"2026-03-09T14:01:01.188612+0000","last_fullsized":"2026-03-09T14:01:01.188612+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:21:33.251755+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1
a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190941+0000","last_change":"2026-03-09T14:00:50.131269+0000","last_active":"2026-03-09T14:01:01.190941+0000","last_peered":"2026-03-09T14:01:01.190941+0000","last_clean":"2026-03-09T14:01:01.190941+0000","last_became_active":"2026-03-09T14:00:50.131165+0000","last_became_peered":"2026-03-09T14:00:50.131165+0000","last_unstale":"2026-03-09T14:01:01.190941+0000","last_undegraded":"2026-03-09T14:01:01.190941+0000","last_fullsized":"2026-03-09T14:01:01.190941+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:32:39.823468+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1d","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188652+0000","last_change":"2026-03-09T14:00:52.132675+0000","last_active":"2026-03-09T14:01:01.188652+0000","last_peered":"2026-03-09T14:01:01.188652+0000","last_clean":"2026-03-09T14:01:01.188652+0000","last_became_active":"2026-03-09T14:00:52.132573+0000","last_became_peered":"2026-03-09T14:00:52.132573+0000","last_unstale":"2026-03-09T14:01:01.188652+0000","last_undegraded":"2026-03-09T14:01:01.188652+0000","last_fullsized":"2026-03-09T14:01:01.188652+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:58:59.467750+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190831+0000","last_change":"2026-03-09T14:00:54.141172+0000","last_active":"2026-03-09T14:01:01.190831+0000","last_peered":"2026-03-09T14:01:01.190831+0000","last_clean":"2026-03-09T14:01:01.190831+0000","last_became_active":"2026-03-09T14:00:54.141019+0000","last_became_peered":"2026-03-09T14:00:54.141019+
0000","last_unstale":"2026-03-09T14:01:01.190831+0000","last_undegraded":"2026-03-09T14:01:01.190831+0000","last_fullsized":"2026-03-09T14:01:01.190831+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:23:31.004418+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"6.1c","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.160130+0000","last_change":"2026-03-09T14:00:56.154429+0000","last_active":"2026-03-09T14:00:57.160130+0000","last_peered":"2026-03-09T14:00:57.160130+0000","last_clean":"2026-03-09T14:00:57.160130+0000","last_became_active":"2026-03-09T14:00:56.154062+0000","last_became_peered":"2026-03-09T14:00:56.154062+0000","last_unstale":"2026-03-09T14:00:57.160130+0000","last_undegraded":"2026-03-09T14:00:57.160130+0000","last_fullsized":"2026-03-09T14:00:57.160130+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:00:25.612543+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125761+0000","last_change":"2026-03-09T14:00:50.120949+0000","last_active":"2026-03-09T14:00:57.125761+0000","last_peered":"2026-03-09T14:00:57.125761+0000","last_clean":"2026-03-09T14:00:57.125761+0000","last_became_active":"2026-03-09T14:00:50.120823+0000","last_became_peered":"2026-03-09T14:00:50.120823+0000","last_unstale":"2026-03-09T14:00:57.125761+0000","last_undegraded":"2026-03-09T14:00:57.125761+0000","last_fullsized":"2026-03-09T14:00:57.125761+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.
088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:12:03.943354+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.1e","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.069929+0000","last_change":"2026-03-09T14:00:52.127212+0000","last_active":"2026-03-09T14:00:58.069929+0000","last_peered":"2026-03-09T14:00:58.069929+0000","last_clean":"2026-03-09T14:00:58.069929+0000","last_became_active":"2026-03-09T14:00:52.127102+0000","last_became_peered":"2026-03-09T14:00:52.127102+0000
","last_unstale":"2026-03-09T14:00:58.069929+0000","last_undegraded":"2026-03-09T14:00:58.069929+0000","last_fullsized":"2026-03-09T14:00:58.069929+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:37:17.757264+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid
":"5.1f","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.189088+0000","last_change":"2026-03-09T14:00:54.152260+0000","last_active":"2026-03-09T14:01:01.189088+0000","last_peered":"2026-03-09T14:01:01.189088+0000","last_clean":"2026-03-09T14:01:01.189088+0000","last_became_active":"2026-03-09T14:00:54.152101+0000","last_became_peered":"2026-03-09T14:00:54.152101+0000","last_unstale":"2026-03-09T14:01:01.189088+0000","last_undegraded":"2026-03-09T14:01:01.189088+0000","last_fullsized":"2026-03-09T14:01:01.189088+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:21:46.931801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.f","version":"54'15","reported_seq":46,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.100259+0000","last_change":"2026-03-09T14:00:52.166393+0000","last_active":"2026-03-09T14:00:58.100259+0000","last_peered":"2026-03-09T14:00:58.100259+0000","last_clean":"2026-03-09T14:00:58.100259+0000","last_became_active":"2026-03-09T14:00:52.166001+0000","last_became_peered":"2026-03-09T14:00:52.166001+0000","last_unstale":"2026-03-09T14:00:58.100259+0000","last_undegraded":"2026-03-09T14:00:58.100259+0000","last_fullsized":"2026-03-09T14:00:58.100259+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:28:23.678927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.8","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188056+0000","last_change":"2026-03-09T14:00:50.135895+0000","last_active":"2026-03-09T14:01:01.188056+0000","last_peered":"2026-03-09T14:01:01.188056+0000","last_clean":"2026-03-09T14:01:01.188056+0000","last_became_active":"2026-03-09T14:00:50.135528+0000","last_became_peered":"2026-03-09T14:00:50.135528+00
00","last_unstale":"2026-03-09T14:01:01.188056+0000","last_undegraded":"2026-03-09T14:01:01.188056+0000","last_fullsized":"2026-03-09T14:01:01.188056+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:07:58.161997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5
.e","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191393+0000","last_change":"2026-03-09T14:00:54.141114+0000","last_active":"2026-03-09T14:01:01.191393+0000","last_peered":"2026-03-09T14:01:01.191393+0000","last_clean":"2026-03-09T14:01:01.191393+0000","last_became_active":"2026-03-09T14:00:54.140873+0000","last_became_peered":"2026-03-09T14:00:54.140873+0000","last_unstale":"2026-03-09T14:01:01.191393+0000","last_undegraded":"2026-03-09T14:01:01.191393+0000","last_fullsized":"2026-03-09T14:01:01.191393+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:00:00.630776+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022152+0000","last_change":"2026-03-09T14:00:56.134867+0000","last_active":"2026-03-09T14:01:01.022152+0000","last_peered":"2026-03-09T14:01:01.022152+0000","last_clean":"2026-03-09T14:01:01.022152+0000","last_became_active":"2026-03-09T14:00:56.134770+0000","last_became_peered":"2026-03-09T14:00:56.134770+0000","last_unstale":"2026-03-09T14:01:01.022152+0000","last_undegraded":"2026-03-09T14:01:01.022152+0000","last_fullsized":"2026-03-09T14:01:01.022152+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:43:00.327917+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.0","version":"54'18","reported_seq":55,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188152+0000","last_change":"2026-03-09T14:00:52.160966+0000","last_active":"2026-03-09T14:01:01.188152+0000","last_peered":"2026-03-09T14:01:01.188152+0000","last_clean":"2026-03-09T14:01:01.188152+0000","last_became_active":"2026-03-09T14:00:52.160096+0000","last_became_peered":"2026-03-09T14:00:52.160096+0000","las
t_unstale":"2026-03-09T14:01:01.188152+0000","last_undegraded":"2026-03-09T14:01:01.188152+0000","last_fullsized":"2026-03-09T14:01:01.188152+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:14:28.631322+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3
.7","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188125+0000","last_change":"2026-03-09T14:00:50.123500+0000","last_active":"2026-03-09T14:01:01.188125+0000","last_peered":"2026-03-09T14:01:01.188125+0000","last_clean":"2026-03-09T14:01:01.188125+0000","last_became_active":"2026-03-09T14:00:50.123416+0000","last_became_peered":"2026-03-09T14:00:50.123416+0000","last_unstale":"2026-03-09T14:01:01.188125+0000","last_undegraded":"2026-03-09T14:01:01.188125+0000","last_fullsized":"2026-03-09T14:01:01.188125+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:53:53.532715+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190511+0000","last_change":"2026-03-09T14:00:54.155414+0000","last_active":"2026-03-09T14:01:01.190511+0000","last_peered":"2026-03-09T14:01:01.190511+0000","last_clean":"2026-03-09T14:01:01.190511+0000","last_became_active":"2026-03-09T14:00:54.154137+0000","last_became_peered":"2026-03-09T14:00:54.154137+0000","last_unstale":"2026-03-09T14:01:01.190511+0000","last_undegraded":"2026-03-09T14:01:01.190511+0000","last_fullsized":"2026-03-09T14:01:01.190511+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:20:51.895356+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190527+0000","last_change":"2026-03-09T14:00:56.149661+0000","last_active":"2026-03-09T14:01:01.190527+0000","last_peered":"2026-03-09T14:01:01.190527+0000","last_clean":"2026-03-09T14:01:01.190527+0000","last_became_active":"2026-03-09T14:00:56.149575+0000","last_became_peered":"2026-03-09T14:00:56.149575+0000","last_
unstale":"2026-03-09T14:01:01.190527+0000","last_undegraded":"2026-03-09T14:01:01.190527+0000","last_fullsized":"2026-03-09T14:01:01.190527+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:20:47.099958+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.1","versi
on":"54'14","reported_seq":44,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191342+0000","last_change":"2026-03-09T14:00:52.148403+0000","last_active":"2026-03-09T14:01:01.191342+0000","last_peered":"2026-03-09T14:01:01.191342+0000","last_clean":"2026-03-09T14:01:01.191342+0000","last_became_active":"2026-03-09T14:00:52.148224+0000","last_became_peered":"2026-03-09T14:00:52.148224+0000","last_unstale":"2026-03-09T14:01:01.191342+0000","last_undegraded":"2026-03-09T14:01:01.191342+0000","last_fullsized":"2026-03-09T14:01:01.191342+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:30:05.259006+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.6","version":"47'1","reported_seq":26,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137602+0000","last_change":"2026-03-09T14:00:50.134015+0000","last_active":"2026-03-09T14:00:57.137602+0000","last_peered":"2026-03-09T14:00:57.137602+0000","last_clean":"2026-03-09T14:00:57.137602+0000","last_became_active":"2026-03-09T14:00:50.133924+0000","last_became_peered":"2026-03-09T14:00:50.133924+0000","last_unstale":"2026-03-09T14:00:57.137602+0000","last_undegraded":"2026-03-09T14:00:57.137602+0000","last_fullsized":"2026-03-09T14:00:57.137602+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49
.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:46:25.566149+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.0","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187643+0000","last_change":"2026-03-09T14:00:54.141114+0000","last_active":"2026-03-09T14:01:01.187643+0000","last_peered":"2026-03-09T14:01:01.187643+0000","last_clean":"2026-03-09T14:01:01.187643+0000","last_became_active":"2026-03-09T14:00:54.140736+0000","last_became_peered":"2026-03-09T14:00:54.140736+0000"
,"last_unstale":"2026-03-09T14:01:01.187643+0000","last_undegraded":"2026-03-09T14:01:01.187643+0000","last_fullsized":"2026-03-09T14:01:01.187643+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:34:35.726416+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3"
,"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159258+0000","last_change":"2026-03-09T14:00:56.636175+0000","last_active":"2026-03-09T14:00:57.159258+0000","last_peered":"2026-03-09T14:00:57.159258+0000","last_clean":"2026-03-09T14:00:57.159258+0000","last_became_active":"2026-03-09T14:00:56.636075+0000","last_became_peered":"2026-03-09T14:00:56.636075+0000","last_unstale":"2026-03-09T14:00:57.159258+0000","last_undegraded":"2026-03-09T14:00:57.159258+0000","last_fullsized":"2026-03-09T14:00:57.159258+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T20:52:58.410103+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.2","version":"54'10","reported_seq":36,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.052022+0000","last_change":"2026-03-09T14:00:52.166468+0000","last_active":"2026-03-09T14:00:58.052022+0000","last_peered":"2026-03-09T14:00:58.052022+0000","last_clean":"2026-03-09T14:00:58.052022+0000","last_became_active":"2026-03-09T14:00:52.166126+0000","last_became_peered":"2026-03-09T14:00:52.166126+0000","last_unstale":"2026-03-09T14:00:58.052022+0000","last_undegraded":"2026-03-09T14:00:58.052022+0000","last_fullsized":"2026-03-09T14:00:58.052022+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:09:52.546608+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021665+0000","last_change":"2026-03-09T14:00:50.123135+0000","last_active":"2026-03-09T14:01:01.021665+0000","last_peered":"2026-03-09T14:01:01.021665+0000","last_clean":"2026-03-09T14:01:01.021665+0000","last_became_active":"2026-03-09T14:00:50.123018+0000","last_became_peered":"2026-03-09T14:00:50.123018+0000
","last_unstale":"2026-03-09T14:01:01.021665+0000","last_undegraded":"2026-03-09T14:01:01.021665+0000","last_fullsized":"2026-03-09T14:01:01.021665+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:06:04.376656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.3
","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.608188+0000","last_change":"2026-03-09T14:00:54.147021+0000","last_active":"2026-03-09T14:00:57.608188+0000","last_peered":"2026-03-09T14:00:57.608188+0000","last_clean":"2026-03-09T14:00:57.608188+0000","last_became_active":"2026-03-09T14:00:54.146860+0000","last_became_peered":"2026-03-09T14:00:54.146860+0000","last_unstale":"2026-03-09T14:00:57.608188+0000","last_undegraded":"2026-03-09T14:00:57.608188+0000","last_fullsized":"2026-03-09T14:00:57.608188+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:55:04.417905+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.138147+0000","last_change":"2026-03-09T14:00:56.146036+0000","last_active":"2026-03-09T14:00:57.138147+0000","last_peered":"2026-03-09T14:00:57.138147+0000","last_clean":"2026-03-09T14:00:57.138147+0000","last_became_active":"2026-03-09T14:00:56.145880+0000","last_became_peered":"2026-03-09T14:00:56.145880+0000","last_unstale":"2026-03-09T14:00:57.138147+0000","last_undegraded":"2026-03-09T14:00:57.138147+0000","last_fullsized":"2026-03-09T14:00:57.138147+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:27:12.701649+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.3","version":"54'19","reported_seq":57,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.039740+0000","last_change":"2026-03-09T14:00:52.133325+0000","last_active":"2026-03-09T14:00:58.039740+0000","last_peered":"2026-03-09T14:00:58.039740+0000","last_clean":"2026-03-09T14:00:58.039740+0000","last_became_active":"2026-03-09T14:00:52.133203+0000","last_became_peered":"2026-03-09T14:00:52.133203+0000","las
t_unstale":"2026-03-09T14:00:58.039740+0000","last_undegraded":"2026-03-09T14:00:58.039740+0000","last_fullsized":"2026-03-09T14:00:58.039740+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:13:43.113266+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,7],"acting":[0,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3
.4","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125832+0000","last_change":"2026-03-09T14:00:50.116304+0000","last_active":"2026-03-09T14:00:57.125832+0000","last_peered":"2026-03-09T14:00:57.125832+0000","last_clean":"2026-03-09T14:00:57.125832+0000","last_became_active":"2026-03-09T14:00:50.116114+0000","last_became_peered":"2026-03-09T14:00:50.116114+0000","last_unstale":"2026-03-09T14:00:57.125832+0000","last_undegraded":"2026-03-09T14:00:57.125832+0000","last_fullsized":"2026-03-09T14:00:57.125832+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:19:15.901445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.189086+0000","last_change":"2026-03-09T14:00:54.153898+0000","last_active":"2026-03-09T14:01:01.189086+0000","last_peered":"2026-03-09T14:01:01.189086+0000","last_clean":"2026-03-09T14:01:01.189086+0000","last_became_active":"2026-03-09T14:00:54.153796+0000","last_became_peered":"2026-03-09T14:00:54.153796+0000","last_unstale":"2026-03-09T14:01:01.189086+0000","last_undegraded":"2026-03-09T14:01:01.189086+0000","last_fullsized":"2026-03-09T14:01:01.189086+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:19:54.805292+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125816+0000","last_change":"2026-03-09T14:00:56.634767+0000","last_active":"2026-03-09T14:00:57.125816+0000","last_peered":"2026-03-09T14:00:57.125816+0000","last_clean":"2026-03-09T14:00:57.125816+0000","last_became_active":"2026-03-09T14:00:56.634564+0000","last_became_peered":"2026-03-09T14:00:56.634564+0000","last_
unstale":"2026-03-09T14:00:57.125816+0000","last_undegraded":"2026-03-09T14:00:57.125816+0000","last_fullsized":"2026-03-09T14:00:57.125816+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:47:42.732204+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.4","versi
on":"54'28","reported_seq":71,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.055650+0000","last_change":"2026-03-09T14:00:52.128982+0000","last_active":"2026-03-09T14:00:58.055650+0000","last_peered":"2026-03-09T14:00:58.055650+0000","last_clean":"2026-03-09T14:00:58.055650+0000","last_became_active":"2026-03-09T14:00:52.128911+0000","last_became_peered":"2026-03-09T14:00:52.128911+0000","last_unstale":"2026-03-09T14:00:58.055650+0000","last_undegraded":"2026-03-09T14:00:58.055650+0000","last_fullsized":"2026-03-09T14:00:58.055650+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":28,"log_dups_size":0,"ondisk_log_size":28,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T16:15:34.540498+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":48,"num_read_kb":33,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,3],"acting":[1,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.3","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191109+0000","last_change":"2026-03-09T14:00:50.117793+0000","last_active":"2026-03-09T14:01:01.191109+0000","last_peered":"2026-03-09T14:01:01.191109+0000","last_clean":"2026-03-09T14:01:01.191109+0000","last_became_active":"2026-03-09T14:00:50.117627+0000","last_became_peered":"2026-03-09T14:00:50.117627+0000","last_unstale":"2026-03-09T14:01:01.191109+0000","last_undegraded":"2026-03-09T14:01:01.191109+0000","last_fullsized":"2026-03-09T14:01:01.191109+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00
:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:52:05.685526+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.2","version":"49'2","reported_seq":34,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021747+0000","last_change":"2026-03-09T14:00:52.104230+0000","last_active":"2026-03-09T14:01:01.021747+0000","last_peered":"2026-03-09T14:01:01.021747+0000","last_clean":"2026-03-09T14:01:01.021747+0000","last_became_active":"2026-03-09T14:00:50.102127+0000","last_became_peered":"2026-03-09T14:00:50.102127+000
0","last_unstale":"2026-03-09T14:01:01.021747+0000","last_undegraded":"2026-03-09T14:01:01.021747+0000","last_fullsized":"2026-03-09T14:01:01.021747+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:11:24.708206+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00042748999999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_
snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137954+0000","last_change":"2026-03-09T14:00:54.138601+0000","last_active":"2026-03-09T14:00:57.137954+0000","last_peered":"2026-03-09T14:00:57.137954+0000","last_clean":"2026-03-09T14:00:57.137954+0000","last_became_active":"2026-03-09T14:00:54.138486+0000","last_became_peered":"2026-03-09T14:00:54.138486+0000","last_unstale":"2026-03-09T14:00:57.137954+0000","last_undegraded":"2026-03-09T14:00:57.137954+0000","last_fullsized":"2026-03-09T14:00:57.137954+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:29:04.810641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187957+0000","last_change":"2026-03-09T14:00:56.156882+0000","last_active":"2026-03-09T14:01:01.187957+0000","last_peered":"2026-03-09T14:01:01.187957+0000","last_clean":"2026-03-09T14:01:01.187957+0000","last_became_active":"2026-03-09T14:00:56.156683+0000","last_became_peered":"2026-03-09T14:00:56.156683+0000","last_unstale":"2026-03-09T14:01:01.187957+0000","last_undegraded":"2026-03-09T14:01:01.187957+0000","last_fullsized":"2026-03-09T14:01:01.187957+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:03:33.439516+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.7","version":"54'13","reported_seq":48,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.956623+0000","last_change":"2026-03-09T14:00:52.114187+0000","last_active":"2026-03-09T14:00:57.956623+0000","last_peered":"2026-03-09T14:00:57.956623+0000","last_clean":"2026-03-09T14:00:57.956623+0000","last_became_active":"2026-03-09T14:00:52.114105+0000","last_became_peered":"2026-03-09T14:00:52.114105+0000","las
t_unstale":"2026-03-09T14:00:57.956623+0000","last_undegraded":"2026-03-09T14:00:57.956623+0000","last_fullsized":"2026-03-09T14:00:57.956623+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:22:31.607401+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3
.0","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125857+0000","last_change":"2026-03-09T14:00:50.116223+0000","last_active":"2026-03-09T14:00:57.125857+0000","last_peered":"2026-03-09T14:00:57.125857+0000","last_clean":"2026-03-09T14:00:57.125857+0000","last_became_active":"2026-03-09T14:00:50.115995+0000","last_became_peered":"2026-03-09T14:00:50.115995+0000","last_unstale":"2026-03-09T14:00:57.125857+0000","last_undegraded":"2026-03-09T14:00:57.125857+0000","last_fullsized":"2026-03-09T14:00:57.125857+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:37:46.722962+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"47'1","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034672+0000","last_change":"2026-03-09T14:00:52.116177+0000","last_active":"2026-03-09T14:01:01.034672+0000","last_peered":"2026-03-09T14:01:01.034672+0000","last_clean":"2026-03-09T14:01:01.034672+0000","last_became_active":"2026-03-09T14:00:50.118462+0000","last_became_peered":"2026-03-09T14:00:50.118462+0000","last_unstale":"2026-03-09T14:01:01.034672+0000","last_undegraded":"2026-03-09T14:01:01.034672+0000","last_fullsized":"2026-03-09T14:01:01.034672+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088
504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:23:53.069926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019892299999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034762+0000","last_change":"2026-03-09T14:00:54.133283+0000","last_active":"2026-03-09T14:01:01.034762+0000","last_peered":"2026-03-09T14:01:01.034762+0000","last_clean":"2026-03-09T14:01:01.034762+0000","last_became_active":"2026-03-09T14:00:54.133091+0000","last_became_peered":"2026-03-09T14:00:
54.133091+0000","last_unstale":"2026-03-09T14:01:01.034762+0000","last_undegraded":"2026-03-09T14:01:01.034762+0000","last_fullsized":"2026-03-09T14:01:01.034762+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:53:27.882227+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]
},{"pgid":"6.5","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159415+0000","last_change":"2026-03-09T14:00:56.635856+0000","last_active":"2026-03-09T14:00:57.159415+0000","last_peered":"2026-03-09T14:00:57.159415+0000","last_clean":"2026-03-09T14:00:57.159415+0000","last_became_active":"2026-03-09T14:00:56.635723+0000","last_became_peered":"2026-03-09T14:00:56.635723+0000","last_unstale":"2026-03-09T14:00:57.159415+0000","last_undegraded":"2026-03-09T14:00:57.159415+0000","last_fullsized":"2026-03-09T14:00:57.159415+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:01:16.080642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.6","version":"54'12","reported_seq":39,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.977720+0000","last_change":"2026-03-09T14:00:52.120372+0000","last_active":"2026-03-09T14:00:57.977720+0000","last_peered":"2026-03-09T14:00:57.977720+0000","last_clean":"2026-03-09T14:00:57.977720+0000","last_became_active":"2026-03-09T14:00:52.120277+0000","last_became_peered":"2026-03-09T14:00:52.120277+0000","last_unstale":"2026-03-09T14:00:57.977720+0000","last_undegraded":"2026-03-09T14:00:57.977720+0000","last_fullsized":"2026-03-09T14:00:57.977720+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:06:01.080244+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,2],"acting":[0,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137637+0000","last_change":"2026-03-09T14:00:50.117479+0000","last_active":"2026-03-09T14:00:57.137637+0000","last_peered":"2026-03-09T14:00:57.137637+0000","last_clean":"2026-03-09T14:00:57.137637+0000","last_became_active":"2026-03-09T14:00:50.117261+0000","last_became_peered":"2026-03-09T14:00:50.117261+0000
","last_unstale":"2026-03-09T14:00:57.137637+0000","last_undegraded":"2026-03-09T14:00:57.137637+0000","last_fullsized":"2026-03-09T14:00:57.137637+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:36:38.275968+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0
","version":"54'5","reported_seq":41,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:59.953618+0000","last_change":"2026-03-09T14:00:52.110129+0000","last_active":"2026-03-09T14:00:59.953618+0000","last_peered":"2026-03-09T14:00:59.953618+0000","last_clean":"2026-03-09T14:00:59.953618+0000","last_became_active":"2026-03-09T14:00:50.132504+0000","last_became_peered":"2026-03-09T14:00:50.132504+0000","last_unstale":"2026-03-09T14:00:59.953618+0000","last_undegraded":"2026-03-09T14:00:59.953618+0000","last_fullsized":"2026-03-09T14:00:59.953618+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:34:20.330004+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032990800000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"5.7","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021473+0000","last_change":"2026-03-09T14:00:54.129625+0000","last_active":"2026-03-09T14:01:01.021473+0000","last_peered":"2026-03-09T14:01:01.021473+0000","last_clean":"2026-03-09T14:01:01.021473+0000","last_became_active":"2026-03-09T14:00:54.129525+0000","last_became_peered":"2026-03-09T14:00:54.129525+0000","last_unstale":"2026-03-09T14:01:01.021473+0000","last_undegraded":"2026-03-09T14:01:01.021473+0000","last_fullsized":"2026-03-09T14:01:01.021473+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0",
"last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:11:16.026801+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125694+0000","last_change":"2026-03-09T14:00:56.149382+0000","last_active":"2026-03-09T14:00:57.125694+0000","last_peered":"2026-03-09T14:00:57.125694+0000","last_clean":"2026-03-09T14:00:57.125694+0000","last_became_active":"2026-03-09T14:00:56.149261+0000","last_became
_peered":"2026-03-09T14:00:56.149261+0000","last_unstale":"2026-03-09T14:00:57.125694+0000","last_undegraded":"2026-03-09T14:00:57.125694+0000","last_fullsized":"2026-03-09T14:00:57.125694+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:04:11.080870+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_p
rimary":1,"purged_snaps":[]},{"pgid":"4.5","version":"54'16","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188430+0000","last_change":"2026-03-09T14:00:52.161099+0000","last_active":"2026-03-09T14:01:01.188430+0000","last_peered":"2026-03-09T14:01:01.188430+0000","last_clean":"2026-03-09T14:01:01.188430+0000","last_became_active":"2026-03-09T14:00:52.161016+0000","last_became_peered":"2026-03-09T14:00:52.161016+0000","last_unstale":"2026-03-09T14:01:01.188430+0000","last_undegraded":"2026-03-09T14:01:01.188430+0000","last_fullsized":"2026-03-09T14:01:01.188430+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:21.492144+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.2","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188214+0000","last_change":"2026-03-09T14:00:50.125364+0000","last_active":"2026-03-09T14:01:01.188214+0000","last_peered":"2026-03-09T14:01:01.188214+0000","last_clean":"2026-03-09T14:01:01.188214+0000","last_became_active":"2026-03-09T14:00:50.124974+0000","last_became_peered":"2026-03-09T14:00:50.124974+0000","last_unstale":"2026-03-09T14:01:01.188214+0000","last_undegraded":"2026-03-09T14:01:01.188214+0000","last_fullsized":"2026-03-09T14:01:01.188214+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:30:36.803487+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"1.0","version":"18'32","reported_seq":35,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159895+0000","last_change":"2026-03-09T14:00:48.395001+0000","last_active":"2026-03-09T14:00:57.159895+0000","last_peered":"2026-03-09T14:00:57.159895+0000","last_clean":"2026-03-09T14:00:57.159895+0000","last_became_active":"2026-03-09T14:00:48.087092+0000","last_became_peered":"2026-03-09T14:00:48.087092+0000
","last_unstale":"2026-03-09T14:00:57.159895+0000","last_undegraded":"2026-03-09T14:00:57.159895+0000","last_fullsized":"2026-03-09T14:00:57.159895+0000","mapping_epoch":44,"log_start":"0'0","ondisk_log_start":"0'0","created":17,"last_epoch_clean":45,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T13:59:57.511953+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T13:59:57.511953+0000","last_clean_scrub_stamp":"2026-03-09T13:59:57.511953+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:07:15.605892+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"5.4","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159936+0000","last_change":"2026-03-09T14:00:54.142968+0000","last_active":"2026-03-09T14:00:57.159936+0000","last_peered":"2026-03-09T14:00:57.159936+0000","last_clean":"2026-03-09T14:00:57.159936+0000","last_became_active":"2026-03-09T14:00:54.142859+0000","last_became_peered":"2026-03-09T14:00:54.142859+0000","last_unstale":"2026-03-09T14:00:57.159936+0000","last_undegraded":"2026-03-09T14:00:57.159936+0000","last_fullsized":"2026-03-09T14:00:57.159936+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:37:56.103462+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.7","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021236+0000","last_change":"2026-03-09T14:00:56.151080+0000","last_active":"2026-03-09T14:01:01.021236+0000","last_peered":"2026-03-09T14:01:01.021236+0000","last_clean":"2026-03-09T14:01:01.021236+0000","last_became_active":"2026-03-09T14:00:56.150858+0000","last_became_peered":"2026-03-09T14:00:56.150858+0000","last_unstale":"2026-03-09T14:01:01.021236+0000","last_undegraded":"2026-03-09T14:01:01.021236+0000","last_fullsized":"2026-03-09T14:01:01.021236+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:56:57.461325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"4.e","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191037+0000","last_change":"2026-03-09T14:00:52.163958+0000","last_active":"2026-03-09T14:01:01.191037+0000","last_peered":"2026-03-09T14:01:01.191037+0000","last_clean":"2026-03-09T14:01:01.191037+0000","last_became_active":"2026-03-09T14:00:52.163868+0000","last_became_peered":"2026-03-09T14:00:52.163868+0000","las
t_unstale":"2026-03-09T14:01:01.191037+0000","last_undegraded":"2026-03-09T14:01:01.191037+0000","last_fullsized":"2026-03-09T14:01:01.191037+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T23:39:36.393630+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.9","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191029+0000","last_change":"2026-03-09T14:00:50.121159+0000","last_active":"2026-03-09T14:01:01.191029+0000","last_peered":"2026-03-09T14:01:01.191029+0000","last_clean":"2026-03-09T14:01:01.191029+0000","last_became_active":"2026-03-09T14:00:50.120617+0000","last_became_peered":"2026-03-09T14:00:50.120617+0000","last_unstale":"2026-03-09T14:01:01.191029+0000","last_undegraded":"2026-03-09T14:01:01.191029+0000","last_fullsized":"2026-03-09T14:01:01.191029+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:34:58.667845+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022044+0000","last_change":"2026-03-09T14:00:54.146444+0000","last_active":"2026-03-09T14:01:01.022044+0000","last_peered":"2026-03-09T14:01:01.022044+0000","last_clean":"2026-03-09T14:01:01.022044+0000","last_became_active":"2026-03-09T14:00:54.146336+0000","last_became_peered":"2026-03-09T14:00:54.146336+0000","last_unstale":"2026-03-09T14:01:01.022044+0000","last_undegraded":"2026-03-09T14:01:01.022044+0000","last_fullsized":"2026-03-09T14:01:01.022044+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:21:06.037240+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187564+0000","last_change":"2026-03-09T14:00:56.636903+0000","last_active":"2026-03-09T14:01:01.187564+0000","last_peered":"2026-03-09T14:01:01.187564+0000","last_clean":"2026-03-09T14:01:01.187564+0000","last_became_active":"2026-03-09T14:00:56.634798+0000","last_became_peered":"2026-03-09T14:00:56.634798+0000","last_
unstale":"2026-03-09T14:01:01.187564+0000","last_undegraded":"2026-03-09T14:01:01.187564+0000","last_fullsized":"2026-03-09T14:01:01.187564+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:25:39.147077+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.d","versi
on":"54'17","reported_seq":51,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190790+0000","last_change":"2026-03-09T14:00:52.164490+0000","last_active":"2026-03-09T14:01:01.190790+0000","last_peered":"2026-03-09T14:01:01.190790+0000","last_clean":"2026-03-09T14:01:01.190790+0000","last_became_active":"2026-03-09T14:00:52.164421+0000","last_became_peered":"2026-03-09T14:00:52.164421+0000","last_unstale":"2026-03-09T14:01:01.190790+0000","last_undegraded":"2026-03-09T14:01:01.190790+0000","last_fullsized":"2026-03-09T14:01:01.190790+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T21:52:43.959871+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,1],"acting":[4,2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.a","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188543+0000","last_change":"2026-03-09T14:00:50.114758+0000","last_active":"2026-03-09T14:01:01.188543+0000","last_peered":"2026-03-09T14:01:01.188543+0000","last_clean":"2026-03-09T14:01:01.188543+0000","last_became_active":"2026-03-09T14:00:50.112071+0000","last_became_peered":"2026-03-09T14:00:50.112071+0000","last_unstale":"2026-03-09T14:01:01.188543+0000","last_undegraded":"2026-03-09T14:01:01.188543+0000","last_fullsized":"2026-03-09T14:01:01.188543+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:53:52.666574+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125684+0000","last_change":"2026-03-09T14:00:54.152303+0000","last_active":"2026-03-09T14:00:57.125684+0000","last_peered":"2026-03-09T14:00:57.125684+0000","last_clean":"2026-03-09T14:00:57.125684+0000","last_became_active":"2026-03-09T14:00:54.152198+0000","last_became_peered":"2026-03-09T14:00:54.152198+0000",
"last_unstale":"2026-03-09T14:00:57.125684+0000","last_undegraded":"2026-03-09T14:00:57.125684+0000","last_fullsized":"2026-03-09T14:00:57.125684+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:10:22.025419+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034810+0000","last_change":"2026-03-09T14:00:56.148068+0000","last_active":"2026-03-09T14:01:01.034810+0000","last_peered":"2026-03-09T14:01:01.034810+0000","last_clean":"2026-03-09T14:01:01.034810+0000","last_became_active":"2026-03-09T14:00:56.147965+0000","last_became_peered":"2026-03-09T14:00:56.147965+0000","last_unstale":"2026-03-09T14:01:01.034810+0000","last_undegraded":"2026-03-09T14:01:01.034810+0000","last_fullsized":"2026-03-09T14:01:01.034810+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:36:04.892964+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.c","version":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191455+0000","last_change":"2026-03-09T14:00:52.147537+0000","last_active":"2026-03-09T14:01:01.191455+0000","last_peered":"2026-03-09T14:01:01.191455+0000","last_clean":"2026-03-09T14:01:01.191455+0000","last_became_active":"2026-03-09T14:00:52.145967+0000","last_became_peered":"2026-03-09T14:00:52.145967+0000","last_unstale":"2026-03-09T14:01:01.191455+0000","last_undegraded":"2026-03-09T14:01:01.191455+0000","last_fullsized":"2026-03-09T14:01:01.191455+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:45:45.425488+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,6],"acting":[4,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.b","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188021+0000","last_change":"2026-03-09T14:00:50.135234+0000","last_active":"2026-03-09T14:01:01.188021+0000","last_peered":"2026-03-09T14:01:01.188021+0000","last_clean":"2026-03-09T14:01:01.188021+0000","last_became_active":"2026-03-09T14:00:50.122241+0000","last_became_peered":"2026-03-09T14:00:50.122241+0000
","last_unstale":"2026-03-09T14:01:01.188021+0000","last_undegraded":"2026-03-09T14:01:01.188021+0000","last_fullsized":"2026-03-09T14:01:01.188021+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:17:22.617123+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.d
","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034974+0000","last_change":"2026-03-09T14:00:54.133340+0000","last_active":"2026-03-09T14:01:01.034974+0000","last_peered":"2026-03-09T14:01:01.034974+0000","last_clean":"2026-03-09T14:01:01.034974+0000","last_became_active":"2026-03-09T14:00:54.133218+0000","last_became_peered":"2026-03-09T14:00:54.133218+0000","last_unstale":"2026-03-09T14:01:01.034974+0000","last_undegraded":"2026-03-09T14:01:01.034974+0000","last_fullsized":"2026-03-09T14:01:01.034974+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T17:30:05.926433+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191239+0000","last_change":"2026-03-09T14:00:56.148535+0000","last_active":"2026-03-09T14:01:01.191239+0000","last_peered":"2026-03-09T14:01:01.191239+0000","last_clean":"2026-03-09T14:01:01.191239+0000","last_became_active":"2026-03-09T14:00:56.142713+0000","last_became_peered":"2026-03-09T14:00:56.142713+0000","last_unstale":"2026-03-09T14:01:01.191239+0000","last_undegraded":"2026-03-09T14:01:01.191239+0000","last_fullsized":"2026-03-09T14:01:01.191239+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:21:23.550254+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.b","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:58.033820+0000","last_change":"2026-03-09T14:00:52.132547+0000","last_active":"2026-03-09T14:00:58.033820+0000","last_peered":"2026-03-09T14:00:58.033820+0000","last_clean":"2026-03-09T14:00:58.033820+0000","last_became_active":"2026-03-09T14:00:52.132444+0000","last_became_peered":"2026-03-09T14:00:52.132444+0000","last
_unstale":"2026-03-09T14:00:58.033820+0000","last_undegraded":"2026-03-09T14:00:58.033820+0000","last_fullsized":"2026-03-09T14:00:58.033820+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:50:33.124223+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.c"
,"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021581+0000","last_change":"2026-03-09T14:00:50.120381+0000","last_active":"2026-03-09T14:01:01.021581+0000","last_peered":"2026-03-09T14:01:01.021581+0000","last_clean":"2026-03-09T14:01:01.021581+0000","last_became_active":"2026-03-09T14:00:50.116004+0000","last_became_peered":"2026-03-09T14:00:50.116004+0000","last_unstale":"2026-03-09T14:01:01.021581+0000","last_undegraded":"2026-03-09T14:01:01.021581+0000","last_fullsized":"2026-03-09T14:01:01.021581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:03:16.730400+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034869+0000","last_change":"2026-03-09T14:00:54.137426+0000","last_active":"2026-03-09T14:01:01.034869+0000","last_peered":"2026-03-09T14:01:01.034869+0000","last_clean":"2026-03-09T14:01:01.034869+0000","last_became_active":"2026-03-09T14:00:54.137326+0000","last_became_peered":"2026-03-09T14:00:54.137326+0000","last_unstale":"2026-03-09T14:01:01.034869+0000","last_undegraded":"2026-03-09T14:01:01.034869+0000","last_fullsized":"2026-03-09T14:01:01.034869+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.1015
35+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:02:08.599363+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.138017+0000","last_change":"2026-03-09T14:00:56.155493+0000","last_active":"2026-03-09T14:00:57.138017+0000","last_peered":"2026-03-09T14:00:57.138017+0000","last_clean":"2026-03-09T14:00:57.138017+0000","last_became_active":"2026-03-09T14:00:56.155369+0000","last_became_peered":"2026-03-09T14:00:56.155369+0000","last_
unstale":"2026-03-09T14:00:57.138017+0000","last_undegraded":"2026-03-09T14:00:57.138017+0000","last_fullsized":"2026-03-09T14:00:57.138017+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:22:44.809797+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.a","versi
on":"54'19","reported_seq":54,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188620+0000","last_change":"2026-03-09T14:00:52.160927+0000","last_active":"2026-03-09T14:01:01.188620+0000","last_peered":"2026-03-09T14:01:01.188620+0000","last_clean":"2026-03-09T14:01:01.188620+0000","last_became_active":"2026-03-09T14:00:52.160815+0000","last_became_peered":"2026-03-09T14:00:52.160815+0000","last_unstale":"2026-03-09T14:01:01.188620+0000","last_undegraded":"2026-03-09T14:01:01.188620+0000","last_fullsized":"2026-03-09T14:01:01.188620+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T19:20:10.337720+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,1,7],"acting":[6,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.d","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159827+0000","last_change":"2026-03-09T14:00:50.127310+0000","last_active":"2026-03-09T14:00:57.159827+0000","last_peered":"2026-03-09T14:00:57.159827+0000","last_clean":"2026-03-09T14:00:57.159827+0000","last_became_active":"2026-03-09T14:00:50.127218+0000","last_became_peered":"2026-03-09T14:00:50.127218+0000","last_unstale":"2026-03-09T14:00:57.159827+0000","last_undegraded":"2026-03-09T14:00:57.159827+0000","last_fullsized":"2026-03-09T14:00:57.159827+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:4
9.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:32:41.014774+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.b","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034929+0000","last_change":"2026-03-09T14:00:54.132366+0000","last_active":"2026-03-09T14:01:01.034929+0000","last_peered":"2026-03-09T14:01:01.034929+0000","last_clean":"2026-03-09T14:01:01.034929+0000","last_became_active":"2026-03-09T14:00:54.132275+0000","last_became_peered":"2026-03-09T14:00:54.132275+0000",
"last_unstale":"2026-03-09T14:01:01.034929+0000","last_undegraded":"2026-03-09T14:01:01.034929+0000","last_fullsized":"2026-03-09T14:01:01.034929+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:35:51.237218+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8",
"version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159806+0000","last_change":"2026-03-09T14:00:56.154569+0000","last_active":"2026-03-09T14:00:57.159806+0000","last_peered":"2026-03-09T14:00:57.159806+0000","last_clean":"2026-03-09T14:00:57.159806+0000","last_became_active":"2026-03-09T14:00:56.154230+0000","last_became_peered":"2026-03-09T14:00:56.154230+0000","last_unstale":"2026-03-09T14:00:57.159806+0000","last_undegraded":"2026-03-09T14:00:57.159806+0000","last_fullsized":"2026-03-09T14:00:57.159806+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:15:34.971763+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.9","version":"54'12","reported_seq":46,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.191240+0000","last_change":"2026-03-09T14:00:52.163664+0000","last_active":"2026-03-09T14:01:01.191240+0000","last_peered":"2026-03-09T14:01:01.191240+0000","last_clean":"2026-03-09T14:01:01.191240+0000","last_became_active":"2026-03-09T14:00:52.162930+0000","last_became_peered":"2026-03-09T14:00:52.162930+0000","last_unstale":"2026-03-09T14:01:01.191240+0000","last_undegraded":"2026-03-09T14:01:01.191240+0000","last_fullsized":"2026-03-09T14:01:01.191240+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T01:14:58.698926+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,3],"acting":[4,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.e","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159698+0000","last_change":"2026-03-09T14:00:50.132760+0000","last_active":"2026-03-09T14:00:57.159698+0000","last_peered":"2026-03-09T14:00:57.159698+0000","last_clean":"2026-03-09T14:00:57.159698+0000","last_became_active":"2026-03-09T14:00:50.132628+0000","last_became_peered":"2026-03-09T14:00:50.132628+00
00","last_unstale":"2026-03-09T14:00:57.159698+0000","last_undegraded":"2026-03-09T14:00:57.159698+0000","last_fullsized":"2026-03-09T14:00:57.159698+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:58:51.476383+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5
.8","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.034843+0000","last_change":"2026-03-09T14:00:54.132344+0000","last_active":"2026-03-09T14:01:01.034843+0000","last_peered":"2026-03-09T14:01:01.034843+0000","last_clean":"2026-03-09T14:01:01.034843+0000","last_became_active":"2026-03-09T14:00:54.132258+0000","last_became_peered":"2026-03-09T14:00:54.132258+0000","last_unstale":"2026-03-09T14:01:01.034843+0000","last_undegraded":"2026-03-09T14:01:01.034843+0000","last_fullsized":"2026-03-09T14:01:01.034843+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:42:35.661374+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187732+0000","last_change":"2026-03-09T14:00:56.156981+0000","last_active":"2026-03-09T14:01:01.187732+0000","last_peered":"2026-03-09T14:01:01.187732+0000","last_clean":"2026-03-09T14:01:01.187732+0000","last_became_active":"2026-03-09T14:00:56.156600+0000","last_became_peered":"2026-03-09T14:00:56.156600+0000","last_unstale":"2026-03-09T14:01:01.187732+0000","last_undegraded":"2026-03-09T14:01:01.187732+0000","last_fullsized":"2026-03-09T14:01:01.187732+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.1117
64+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:31:49.023626+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.8","version":"54'15","reported_seq":48,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021851+0000","last_change":"2026-03-09T14:00:52.133676+0000","last_active":"2026-03-09T14:01:01.021851+0000","last_peered":"2026-03-09T14:01:01.021851+0000","last_clean":"2026-03-09T14:01:01.021851+0000","last_became_active":"2026-03-09T14:00:52.133525+0000","last_became_peered":"2026-03-09T14:00:52.133525+0000","las
t_unstale":"2026-03-09T14:01:01.021851+0000","last_undegraded":"2026-03-09T14:01:01.021851+0000","last_fullsized":"2026-03-09T14:01:01.021851+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:48:09.694923+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,6],"acting":[5,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3
.f","version":"47'3","reported_seq":43,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.478514+0000","last_change":"2026-03-09T14:00:50.119383+0000","last_active":"2026-03-09T14:00:57.478514+0000","last_peered":"2026-03-09T14:00:57.478514+0000","last_clean":"2026-03-09T14:00:57.478514+0000","last_became_active":"2026-03-09T14:00:50.119246+0000","last_became_peered":"2026-03-09T14:00:50.119246+0000","last_unstale":"2026-03-09T14:00:57.478514+0000","last_undegraded":"2026-03-09T14:00:57.478514+0000","last_fullsized":"2026-03-09T14:00:57.478514+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":3,"log_dups_size":0,"ondisk_log_size":3,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:54:45.511440+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":528,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":17,"num_read_kb":17,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.9","version":"54'8","reported_seq":28,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.617072+0000","last_change":"2026-03-09T14:00:54.157391+0000","last_active":"2026-03-09T14:00:57.617072+0000","last_peered":"2026-03-09T14:00:57.617072+0000","last_clean":"2026-03-09T14:00:57.617072+0000","last_became_active":"2026-03-09T14:00:54.157042+0000","last_became_peered":"2026-03-09T14:00:54.157042+0000","last_unstale":"2026-03-09T14:00:57.617072+0000","last_undegraded":"2026-03-09T14:00:57.617072+0000","last_fullsized":"2026-03-09T14:00:57.617072+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53
.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:56:01.518933+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.a","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021822+0000","last_change":"2026-03-09T14:00:56.637208+0000","last_active":"2026-03-09T14:01:01.021822+0000","last_peered":"2026-03-09T14:01:01.021822+0000","last_clean":"2026-03-09T14:01:01.021822+0000","last_became_active":"2026-03-09T14:00:56.637101+0000","last_became_peered":"2026-03-09T14:00:56.637101+0000","
last_unstale":"2026-03-09T14:01:01.021822+0000","last_undegraded":"2026-03-09T14:01:01.021822+0000","last_fullsized":"2026-03-09T14:01:01.021822+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:09:04.259304+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.10",
"version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188522+0000","last_change":"2026-03-09T14:00:50.115631+0000","last_active":"2026-03-09T14:01:01.188522+0000","last_peered":"2026-03-09T14:01:01.188522+0000","last_clean":"2026-03-09T14:01:01.188522+0000","last_became_active":"2026-03-09T14:00:50.115411+0000","last_became_peered":"2026-03-09T14:00:50.115411+0000","last_unstale":"2026-03-09T14:01:01.188522+0000","last_undegraded":"2026-03-09T14:01:01.188522+0000","last_fullsized":"2026-03-09T14:01:01.188522+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T18:35:54.222544+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.17","version":"54'6","reported_seq":32,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188576+0000","last_change":"2026-03-09T14:00:52.160532+0000","last_active":"2026-03-09T14:01:01.188576+0000","last_peered":"2026-03-09T14:01:01.188576+0000","last_clean":"2026-03-09T14:01:01.188576+0000","last_became_active":"2026-03-09T14:00:52.159413+0000","last_became_peered":"2026-03-09T14:00:52.159413+0000","last_unstale":"2026-03-09T14:01:01.188576+0000","last_undegraded":"2026-03-09T14:01:01.188576+0000","last_fullsized":"2026-03-09T14:01:01.188576+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:07:07.603055+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021622+0000","last_change":"2026-03-09T14:00:54.134808+0000","last_active":"2026-03-09T14:01:01.021622+0000","last_peered":"2026-03-09T14:01:01.021622+0000","last_clean":"2026-03-09T14:01:01.021622+0000","last_became_active":"2026-03-09T14:00:54.134551+0000","last_became_peered":"2026-03-09T14:00:54.134551+0000","la
st_unstale":"2026-03-09T14:01:01.021622+0000","last_undegraded":"2026-03-09T14:01:01.021622+0000","last_fullsized":"2026-03-09T14:01:01.021622+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T15:21:26.040903+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","v
ersion":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.160181+0000","last_change":"2026-03-09T14:00:56.635796+0000","last_active":"2026-03-09T14:00:57.160181+0000","last_peered":"2026-03-09T14:00:57.160181+0000","last_clean":"2026-03-09T14:00:57.160181+0000","last_became_active":"2026-03-09T14:00:56.635587+0000","last_became_peered":"2026-03-09T14:00:56.635587+0000","last_unstale":"2026-03-09T14:00:57.160181+0000","last_undegraded":"2026-03-09T14:00:57.160181+0000","last_fullsized":"2026-03-09T14:00:57.160181+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:19:19.163679+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"4.16","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.749348+0000","last_change":"2026-03-09T14:00:52.127333+0000","last_active":"2026-03-09T14:00:57.749348+0000","last_peered":"2026-03-09T14:00:57.749348+0000","last_clean":"2026-03-09T14:00:57.749348+0000","last_became_active":"2026-03-09T14:00:52.127097+0000","last_became_peered":"2026-03-09T14:00:52.127097+0000","last_unstale":"2026-03-09T14:00:57.749348+0000","last_undegraded":"2026-03-09T14:00:57.749348+0000","last_fullsized":"2026-03-09T14:00:57.749348+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T16:47:11.041277+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,7],"acting":[0,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.11","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159581+0000","last_change":"2026-03-09T14:00:50.116922+0000","last_active":"2026-03-09T14:00:57.159581+0000","last_peered":"2026-03-09T14:00:57.159581+0000","last_clean":"2026-03-09T14:00:57.159581+0000","last_became_active":"2026-03-09T14:00:50.115220+0000","last_became_peered":"2026-03-09T14:00:50.115220+000
0","last_unstale":"2026-03-09T14:00:57.159581+0000","last_undegraded":"2026-03-09T14:00:57.159581+0000","last_fullsized":"2026-03-09T14:00:57.159581+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:10:25.689629+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.
17","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188677+0000","last_change":"2026-03-09T14:00:54.146358+0000","last_active":"2026-03-09T14:01:01.188677+0000","last_peered":"2026-03-09T14:01:01.188677+0000","last_clean":"2026-03-09T14:01:01.188677+0000","last_became_active":"2026-03-09T14:00:54.144879+0000","last_became_peered":"2026-03-09T14:00:54.144879+0000","last_unstale":"2026-03-09T14:01:01.188677+0000","last_undegraded":"2026-03-09T14:01:01.188677+0000","last_fullsized":"2026-03-09T14:01:01.188677+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:53:17.258641+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.035068+0000","last_change":"2026-03-09T14:00:56.153987+0000","last_active":"2026-03-09T14:01:01.035068+0000","last_peered":"2026-03-09T14:01:01.035068+0000","last_clean":"2026-03-09T14:01:01.035068+0000","last_became_active":"2026-03-09T14:00:56.153881+0000","last_became_peered":"2026-03-09T14:00:56.153881+0000","last_unstale":"2026-03-09T14:01:01.035068+0000","last_undegraded":"2026-03-09T14:01:01.035068+0000","last_fullsized":"2026-03-09T14:01:01.035068+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:58:43.420456+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"4.15","version":"54'9","reported_seq":39,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021896+0000","last_change":"2026-03-09T14:00:52.132726+0000","last_active":"2026-03-09T14:01:01.021896+0000","last_peered":"2026-03-09T14:01:01.021896+0000","last_clean":"2026-03-09T14:01:01.021896+0000","last_became_active":"2026-03-09T14:00:52.132586+0000","last_became_peered":"2026-03-09T14:00:52.132586+0000","la
st_unstale":"2026-03-09T14:01:01.021896+0000","last_undegraded":"2026-03-09T14:01:01.021896+0000","last_fullsized":"2026-03-09T14:01:01.021896+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:40:50.662955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,3],"acting":[5,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.
12","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137588+0000","last_change":"2026-03-09T14:00:50.117345+0000","last_active":"2026-03-09T14:00:57.137588+0000","last_peered":"2026-03-09T14:00:57.137588+0000","last_clean":"2026-03-09T14:00:57.137588+0000","last_became_active":"2026-03-09T14:00:50.116949+0000","last_became_peered":"2026-03-09T14:00:50.116949+0000","last_unstale":"2026-03-09T14:00:57.137588+0000","last_undegraded":"2026-03-09T14:00:57.137588+0000","last_fullsized":"2026-03-09T14:00:57.137588+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:58:28.202939+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"54'8","reported_seq":33,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188356+0000","last_change":"2026-03-09T14:00:54.151538+0000","last_active":"2026-03-09T14:01:01.188356+0000","last_peered":"2026-03-09T14:01:01.188356+0000","last_clean":"2026-03-09T14:01:01.188356+0000","last_became_active":"2026-03-09T14:00:54.151387+0000","last_became_peered":"2026-03-09T14:00:54.151387+0000","last_unstale":"2026-03-09T14:01:01.188356+0000","last_undegraded":"2026-03-09T14:01:01.188356+0000","last_fullsized":"2026-03-09T14:01:01.188356+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.10
1535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:29:45.507392+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190399+0000","last_change":"2026-03-09T14:00:56.149911+0000","last_active":"2026-03-09T14:01:01.190399+0000","last_peered":"2026-03-09T14:01:01.190399+0000","last_clean":"2026-03-09T14:01:01.190399+0000","last_became_active":"2026-03-09T14:00:56.148413+0000","last_became_peered":"2026-03-09T14:00:56.148413+0000","la
st_unstale":"2026-03-09T14:01:01.190399+0000","last_undegraded":"2026-03-09T14:01:01.190399+0000","last_fullsized":"2026-03-09T14:01:01.190399+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T00:45:40.538100+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"4.14","v
ersion":"54'10","reported_seq":38,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188537+0000","last_change":"2026-03-09T14:00:52.160594+0000","last_active":"2026-03-09T14:01:01.188537+0000","last_peered":"2026-03-09T14:01:01.188537+0000","last_clean":"2026-03-09T14:01:01.188537+0000","last_became_active":"2026-03-09T14:00:52.159173+0000","last_became_peered":"2026-03-09T14:00:52.159173+0000","last_unstale":"2026-03-09T14:01:01.188537+0000","last_undegraded":"2026-03-09T14:01:01.188537+0000","last_fullsized":"2026-03-09T14:01:01.188537+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:14:28.480143+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.13","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159515+0000","last_change":"2026-03-09T14:00:50.120833+0000","last_active":"2026-03-09T14:00:57.159515+0000","last_peered":"2026-03-09T14:00:57.159515+0000","last_clean":"2026-03-09T14:00:57.159515+0000","last_became_active":"2026-03-09T14:00:50.120714+0000","last_became_peered":"2026-03-09T14:00:50.120714+0000","last_unstale":"2026-03-09T14:00:57.159515+0000","last_undegraded":"2026-03-09T14:00:57.159515+0000","last_fullsized":"2026-03-09T14:00:57.159515+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49
.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T14:18:34.913502+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.15","version":"54'8","reported_seq":30,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.022070+0000","last_change":"2026-03-09T14:00:54.135125+0000","last_active":"2026-03-09T14:01:01.022070+0000","last_peered":"2026-03-09T14:01:01.022070+0000","last_clean":"2026-03-09T14:01:01.022070+0000","last_became_active":"2026-03-09T14:00:54.135005+0000","last_became_peered":"2026-03-09T14:00:54.135005+0000"
,"last_unstale":"2026-03-09T14:01:01.022070+0000","last_undegraded":"2026-03-09T14:01:01.022070+0000","last_fullsized":"2026-03-09T14:01:01.022070+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":8,"log_dups_size":0,"ondisk_log_size":8,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:06:42.911471+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16
","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137499+0000","last_change":"2026-03-09T14:00:56.154968+0000","last_active":"2026-03-09T14:00:57.137499+0000","last_peered":"2026-03-09T14:00:57.137499+0000","last_clean":"2026-03-09T14:00:57.137499+0000","last_became_active":"2026-03-09T14:00:56.154874+0000","last_became_peered":"2026-03-09T14:00:56.154874+0000","last_unstale":"2026-03-09T14:00:57.137499+0000","last_undegraded":"2026-03-09T14:00:57.137499+0000","last_fullsized":"2026-03-09T14:00:57.137499+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T23:40:04.605112+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.13","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190991+0000","last_change":"2026-03-09T14:00:52.164241+0000","last_active":"2026-03-09T14:01:01.190991+0000","last_peered":"2026-03-09T14:01:01.190991+0000","last_clean":"2026-03-09T14:01:01.190991+0000","last_became_active":"2026-03-09T14:00:52.164150+0000","last_became_peered":"2026-03-09T14:00:52.164150+0000","last_unstale":"2026-03-09T14:01:01.190991+0000","last_undegraded":"2026-03-09T14:01:01.190991+0000","last_fullsized":"2026-03-09T14:01:01.190991+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:41:50.296756+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.14","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.190967+0000","last_change":"2026-03-09T14:00:50.113905+0000","last_active":"2026-03-09T14:01:01.190967+0000","last_peered":"2026-03-09T14:01:01.190967+0000","last_clean":"2026-03-09T14:01:01.190967+0000","last_became_active":"2026-03-09T14:00:50.113428+0000","last_became_peered":"2026-03-09T14:00:50.113428+
0000","last_unstale":"2026-03-09T14:01:01.190967+0000","last_undegraded":"2026-03-09T14:01:01.190967+0000","last_fullsized":"2026-03-09T14:01:01.190967+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T19:26:12.423394+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":
"5.12","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.125242+0000","last_change":"2026-03-09T14:00:54.131567+0000","last_active":"2026-03-09T14:00:57.125242+0000","last_peered":"2026-03-09T14:00:57.125242+0000","last_clean":"2026-03-09T14:00:57.125242+0000","last_became_active":"2026-03-09T14:00:54.130774+0000","last_became_peered":"2026-03-09T14:00:54.130774+0000","last_unstale":"2026-03-09T14:00:57.125242+0000","last_undegraded":"2026-03-09T14:00:57.125242+0000","last_fullsized":"2026-03-09T14:00:57.125242+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T14:40:11.888601+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.187596+0000","last_change":"2026-03-09T14:00:56.145531+0000","last_active":"2026-03-09T14:01:01.187596+0000","last_peered":"2026-03-09T14:01:01.187596+0000","last_clean":"2026-03-09T14:01:01.187596+0000","last_became_active":"2026-03-09T14:00:56.145416+0000","last_became_peered":"2026-03-09T14:00:56.145416+0000","last_unstale":"2026-03-09T14:01:01.187596+0000","last_undegraded":"2026-03-09T14:01:01.187596+0000","last_fullsized":"2026-03-09T14:01:01.187596+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111
764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:56:44.578058+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.12","version":"54'9","reported_seq":37,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.920002+0000","last_change":"2026-03-09T14:00:52.123624+0000","last_active":"2026-03-09T14:00:57.920002+0000","last_peered":"2026-03-09T14:00:57.920002+0000","last_clean":"2026-03-09T14:00:57.920002+0000","last_became_active":"2026-03-09T14:00:52.121963+0000","last_became_peered":"2026-03-09T14:00:52.121963+0000","la
st_unstale":"2026-03-09T14:00:57.920002+0000","last_undegraded":"2026-03-09T14:00:57.920002+0000","last_fullsized":"2026-03-09T14:00:57.920002+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:14:31.493347+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.
15","version":"0'0","reported_seq":25,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159475+0000","last_change":"2026-03-09T14:00:50.118082+0000","last_active":"2026-03-09T14:00:57.159475+0000","last_peered":"2026-03-09T14:00:57.159475+0000","last_clean":"2026-03-09T14:00:57.159475+0000","last_became_active":"2026-03-09T14:00:50.117980+0000","last_became_peered":"2026-03-09T14:00:50.117980+0000","last_unstale":"2026-03-09T14:00:57.159475+0000","last_undegraded":"2026-03-09T14:00:57.159475+0000","last_fullsized":"2026-03-09T14:00:57.159475+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T00:47:57.554137+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188236+0000","last_change":"2026-03-09T14:00:54.136331+0000","last_active":"2026-03-09T14:01:01.188236+0000","last_peered":"2026-03-09T14:01:01.188236+0000","last_clean":"2026-03-09T14:01:01.188236+0000","last_became_active":"2026-03-09T14:00:54.136175+0000","last_became_peered":"2026-03-09T14:00:54.136175+0000","last_unstale":"2026-03-09T14:01:01.188236+0000","last_undegraded":"2026-03-09T14:01:01.188236+0000","last_fullsized":"2026-03-09T14:01:01.188236+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101
535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:08:42.121733+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.10","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137388+0000","last_change":"2026-03-09T14:00:56.143759+0000","last_active":"2026-03-09T14:00:57.137388+0000","last_peered":"2026-03-09T14:00:57.137388+0000","last_clean":"2026-03-09T14:00:57.137388+0000","last_became_active":"2026-03-09T14:00:56.143664+0000","last_became_peered":"2026-03-09T14:00:56.143664+0000","las
t_unstale":"2026-03-09T14:00:57.137388+0000","last_undegraded":"2026-03-09T14:00:57.137388+0000","last_fullsized":"2026-03-09T14:00:57.137388+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T18:46:55.622334+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"4.11","ve
rsion":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188496+0000","last_change":"2026-03-09T14:00:52.161210+0000","last_active":"2026-03-09T14:01:01.188496+0000","last_peered":"2026-03-09T14:01:01.188496+0000","last_clean":"2026-03-09T14:01:01.188496+0000","last_became_active":"2026-03-09T14:00:52.159259+0000","last_became_peered":"2026-03-09T14:00:52.159259+0000","last_unstale":"2026-03-09T14:01:01.188496+0000","last_undegraded":"2026-03-09T14:01:01.188496+0000","last_fullsized":"2026-03-09T14:01:01.188496+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T01:08:13.829084+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.16","version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.021508+0000","last_change":"2026-03-09T14:00:50.120914+0000","last_active":"2026-03-09T14:01:01.021508+0000","last_peered":"2026-03-09T14:01:01.021508+0000","last_clean":"2026-03-09T14:01:01.021508+0000","last_became_active":"2026-03-09T14:00:50.116418+0000","last_became_peered":"2026-03-09T14:00:50.116418+0000","last_unstale":"2026-03-09T14:01:01.021508+0000","last_undegraded":"2026-03-09T14:01:01.021508+0000","last_fullsized":"2026-03-09T14:01:01.021508+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:
49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:15:21.013213+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159745+0000","last_change":"2026-03-09T14:00:54.157275+0000","last_active":"2026-03-09T14:00:57.159745+0000","last_peered":"2026-03-09T14:00:57.159745+0000","last_clean":"2026-03-09T14:00:57.159745+0000","last_became_active":"2026-03-09T14:00:54.156730+0000","last_became_peered":"2026-03-09T14:00:54.156730+0000
","last_unstale":"2026-03-09T14:00:57.159745+0000","last_undegraded":"2026-03-09T14:00:57.159745+0000","last_fullsized":"2026-03-09T14:00:57.159745+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:43:37.617001+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":15,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188472+0000","last_change":"2026-03-09T14:00:56.637443+0000","last_active":"2026-03-09T14:01:01.188472+0000","last_peered":"2026-03-09T14:01:01.188472+0000","last_clean":"2026-03-09T14:01:01.188472+0000","last_became_active":"2026-03-09T14:00:56.635343+0000","last_became_peered":"2026-03-09T14:00:56.635343+0000","last_unstale":"2026-03-09T14:01:01.188472+0000","last_undegraded":"2026-03-09T14:01:01.188472+0000","last_fullsized":"2026-03-09T14:01:01.188472+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T15:29:49.801540+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.10","version":"54'4","reported_seq":29,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188313+0000","last_change":"2026-03-09T14:00:52.142301+0000","last_active":"2026-03-09T14:01:01.188313+0000","last_peered":"2026-03-09T14:01:01.188313+0000","last_clean":"2026-03-09T14:01:01.188313+0000","last_became_active":"2026-03-09T14:00:52.142117+0000","last_became_peered":"2026-03-09T14:00:52.142117+0000","last_unstale":"2026-03-09T14:01:01.188313+0000","last_undegraded":"2026-03-09T14:01:01.188313+0000","last_fullsized":"2026-03-09T14:01:01.188313+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.09
4800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:52:04.977371+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,6],"acting":[3,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"47'1","reported_seq":31,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.366979+0000","last_change":"2026-03-09T14:00:50.124617+0000","last_active":"2026-03-09T14:00:57.366979+0000","last_peered":"2026-03-09T14:00:57.366979+0000","last_clean":"2026-03-09T14:00:57.366979+0000","last_became_active":"2026-03-09T14:00:50.124352+0000","last_became_peered":"2026-03-09T14:00:50.124352+0000","l
ast_unstale":"2026-03-09T14:00:57.366979+0000","last_undegraded":"2026-03-09T14:00:57.366979+0000","last_fullsized":"2026-03-09T14:00:57.366979+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:24:32.349341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":7,"num_read_kb":7,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.11"
,"version":"0'0","reported_seq":19,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188850+0000","last_change":"2026-03-09T14:00:54.152193+0000","last_active":"2026-03-09T14:01:01.188850+0000","last_peered":"2026-03-09T14:01:01.188850+0000","last_clean":"2026-03-09T14:01:01.188850+0000","last_became_active":"2026-03-09T14:00:54.151950+0000","last_became_peered":"2026-03-09T14:00:54.151950+0000","last_unstale":"2026-03-09T14:01:01.188850+0000","last_undegraded":"2026-03-09T14:01:01.188850+0000","last_fullsized":"2026-03-09T14:01:01.188850+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:37:25.592969+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"54'1","reported_seq":14,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.159313+0000","last_change":"2026-03-09T14:00:56.160331+0000","last_active":"2026-03-09T14:00:57.159313+0000","last_peered":"2026-03-09T14:00:57.159313+0000","last_clean":"2026-03-09T14:00:57.159313+0000","last_became_active":"2026-03-09T14:00:56.160148+0000","last_became_peered":"2026-03-09T14:00:56.160148+0000","last_unstale":"2026-03-09T14:00:57.159313+0000","last_undegraded":"2026-03-09T14:00:57.159313+0000","last_fullsized":"2026-03-09T14:00:57.159313+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.11
1764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T22:24:23.174522+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":13,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.126135+0000","last_change":"2026-03-09T14:00:56.130959+0000","last_active":"2026-03-09T14:00:57.126135+0000","last_peered":"2026-03-09T14:00:57.126135+0000","last_clean":"2026-03-09T14:00:57.126135+0000","last_became_active":"2026-03-09T14:00:56.130810+0000","last_became_peered":"2026-03-09T14:00:56.130810+0000","l
ast_unstale":"2026-03-09T14:00:57.126135+0000","last_undegraded":"2026-03-09T14:00:57.126135+0000","last_fullsized":"2026-03-09T14:00:57.126135+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":52,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:55.111764+0000","last_clean_scrub_stamp":"2026-03-09T14:00:55.111764+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T20:11:56.350855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18","
version":"0'0","reported_seq":27,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188177+0000","last_change":"2026-03-09T14:00:50.135782+0000","last_active":"2026-03-09T14:01:01.188177+0000","last_peered":"2026-03-09T14:01:01.188177+0000","last_clean":"2026-03-09T14:01:01.188177+0000","last_became_active":"2026-03-09T14:00:50.135504+0000","last_became_peered":"2026-03-09T14:00:50.135504+0000","last_unstale":"2026-03-09T14:01:01.188177+0000","last_undegraded":"2026-03-09T14:01:01.188177+0000","last_fullsized":"2026-03-09T14:01:01.188177+0000","mapping_epoch":46,"log_start":"0'0","ondisk_log_start":"0'0","created":46,"last_epoch_clean":47,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:49.088504+0000","last_clean_scrub_stamp":"2026-03-09T14:00:49.088504+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T22:53:25.156930+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"4.1f","version":"54'11","reported_seq":42,"reported_epoch":55,"state":"active+clean","last_fresh":"2026-03-09T14:01:01.188668+0000","last_change":"2026-03-09T14:00:52.146674+0000","last_active":"2026-03-09T14:01:01.188668+0000","last_peered":"2026-03-09T14:01:01.188668+0000","last_clean":"2026-03-09T14:01:01.188668+0000","last_became_active":"2026-03-09T14:00:52.146552+0000","last_became_peered":"2026-03-09T14:00:52.146552+0000","last_unstale":"2026-03-09T14:01:01.188668+0000","last_undegraded":"2026-03-09T14:01:01.188668+0000","last_fullsized":"2026-03-09T14:01:01.188668+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":48,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:51.094800+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:51.0
94800+0000","last_clean_scrub_stamp":"2026-03-09T14:00:51.094800+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T17:42:55.294332+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,1],"acting":[6,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":17,"reported_epoch":54,"state":"active+clean","last_fresh":"2026-03-09T14:00:57.137865+0000","last_change":"2026-03-09T14:00:54.136626+0000","last_active":"2026-03-09T14:00:57.137865+0000","last_peered":"2026-03-09T14:00:57.137865+0000","last_clean":"2026-03-09T14:00:57.137865+0000","last_became_active":"2026-03-09T14:00:54.135058+0000","last_became_peered":"2026-03-09T14:00:54.135058+
0000","last_unstale":"2026-03-09T14:00:57.137865+0000","last_undegraded":"2026-03-09T14:00:57.137865+0000","last_fullsized":"2026-03-09T14:00:57.137865+0000","mapping_epoch":50,"log_start":"0'0","ondisk_log_start":"0'0","created":50,"last_epoch_clean":51,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T14:00:53.101535+0000","last_clean_scrub_stamp":"2026-03-09T14:00:53.101535+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T21:25:37.631482+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]}],"pool_s
tats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_
snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":64,"ondisk_log_size":64,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":698,"num_read_kb":455,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":393,"ondisk_log_size":393,"up":96,"acting":96,"num_store_stats":8},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":24,"num_read_kb":24,"num_write":10,"num_write_kb":6,"num
_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":8,"num_read_kb":3,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size
":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":2314240,"data_stored":2296400,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":7}],"osd_stats":[{"osd":7,"up_from":43,"seq":184683593733,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27828,"kb_used_data":996,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939596,"statfs":{"total":21470642176,"available":21442146304,"internally_reserved":0,"allocated":1019904,"data_stored":666574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns"
:0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":38,"seq":163208757255,"num_pgs":43,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27800,"kb_used_data":968,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939624,"statfs":{"total":21470642176,"available":21442174976,"internally_reserved":0,"allocated":991232,"data_stored":665040,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":13,"apply_latency_ms":13,"commit_latency_ns":13000000,"apply_latency_ns":13000000},"alerts":[]},{"osd":5,"up_from":33,"seq":141733920777,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27360,"kb_used_data":524,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940064,"statfs":{"total":21470642176,"available":21442625536,"internally_reserved":0,"allocated":536576,"data_stored":207112,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":28,"seq":120259084299,"num_pgs":58,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27404,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940020,"statfs":{"total":21470642176,"available":21442580480,"internally_reserved":0,"allocated":577536,"data_stored":212964,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_
peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":29,"apply_latency_ms":29,"commit_latency_ns":29000000,"apply_latency_ns":29000000},"alerts":[]},{"osd":3,"up_from":23,"seq":98784247821,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27404,"kb_used_data":568,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940020,"statfs":{"total":21470642176,"available":21442580480,"internally_reserved":0,"allocated":581632,"data_stored":213794,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":14,"apply_latency_ms":14,"commit_latency_ns":14000000,"apply_latency_ns":14000000},"alerts":[]},{"osd":2,"up_from":16,"seq":68719476751,"num_pgs":36,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27364,"kb_used_data":528,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940060,"statfs":{"total":21470642176,"available":21442621440,"internally_reserved":0,"allocated":540672,"data_stored":212071,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":20,"apply_latency_ms":20,"commit_latency_ns":20000000,"apply_latency_ns":20000000},"alerts":[]},{"osd":1,"up_from":12,"seq":51539607569,"num_pgs":57,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27416,"kb_used_data":580,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940008,"st
atfs":{"total":21470642176,"available":21442568192,"internally_reserved":0,"allocated":593920,"data_stored":207545,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738387,"num_pgs":46,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27832,"kb_used_data":1000,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939592,"statfs":{"total":21470642176,"available":21442142208,"internally_reserved":0,"allocated":1024000,"data_stored":667263,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserve
d":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":408,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"int
ernally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":1567,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":1039,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":20480,"data_stored":620,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available
":0,"internally_reserved":0,"allocated":12288,"data_stored":528,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":90112,"data_stored":2338,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":32768,"data_stored":798,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1898,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":53248,"data_stored":1474,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":990,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":36864,"data_stored":1034,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1254,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid"
:5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"availab
le":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T14:01:05.182 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T14:01:05.182 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T14:01:05.182 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T14:01:05.182 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph health --format=json 2026-03-09T14:01:05.396 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[52586]: pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 159 op/s 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[58994]: pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 159 op/s 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.422 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:05 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:01:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:05 vm04 ceph-mon[54203]: pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 159 op/s 2026-03-09T14:01:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' 2026-03-09T14:01:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:05 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T14:01:05.642 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:01:05.643 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T14:01:05.727 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T14:01:05.727 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T14:01:05.727 INFO:teuthology.run_tasks:Running task workunit... 2026-03-09T14:01:05.731 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T14:01:05.731 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-09T14:01:05.731 DEBUG:teuthology.orchestra.run.vm03:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-09T14:01:05.761 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T14:01:05.761 INFO:teuthology.orchestra.run.vm03.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-09T14:01:05.761 DEBUG:teuthology.orchestra.run.vm03:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T14:01:05.827 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-09T14:01:05.827 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-09T14:01:05.886 INFO:tasks.workunit:timeout=1h 2026-03-09T14:01:05.886 INFO:tasks.workunit:cleanup=True 2026-03-09T14:01:05.886 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589 2026-03-09T14:01:05.947 INFO:tasks.workunit.client.0.vm03.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 
2026-03-09T14:01:05.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:05 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: ignoring --setuser ceph since I am not root 2026-03-09T14:01:05.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:05 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: ignoring --setgroup ceph since I am not root 2026-03-09T14:01:05.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:05 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:05.828+0000 7f8b2651f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:01:05.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:05 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:05.869+0000 7f8b2651f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:01:06.043 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setuser ceph since I am not root 2026-03-09T14:01:06.043 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ignoring --setgroup ceph since I am not root 2026-03-09T14:01:06.043 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:05.825+0000 7fd0f9348140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T14:01:06.043 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:05.866+0000 7fd0f9348140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T14:01:06.491 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:06.289+0000 7f8b2651f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:01:06.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:06 vm04 ceph-mon[54203]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:06 vm04 ceph-mon[54203]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:06 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2579282433' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:01:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:06 vm04 ceph-mon[54203]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:01:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:06 vm04 ceph-mon[54203]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[58994]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[58994]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2579282433' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[58994]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[58994]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[52586]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[52586]: from='client.14598 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/2579282433' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[52586]: from='mgr.14150 192.168.123.103:0/768962416' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:06 vm03 ceph-mon[52586]: mgrmap e17: y(active, since 2m), standbys: x 2026-03-09T14:01:06.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:06.295+0000 7fd0f9348140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:06.620+0000 7f8b2651f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: from numpy import show_config as show_numpy_config 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:06.705+0000 7f8b2651f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:06.743+0000 7f8b2651f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:01:06.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:06 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:06.819+0000 7f8b2651f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:06.617+0000 7fd0f9348140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: from numpy import show_config as show_numpy_config 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:06.701+0000 7fd0f9348140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:06.738+0000 7fd0f9348140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T14:01:07.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:06 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:06.807+0000 7fd0f9348140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T14:01:07.569 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.306+0000 7fd0f9348140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:01:07.569 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.415+0000 7fd0f9348140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:01:07.569 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.454+0000 7fd0f9348140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:01:07.569 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.489+0000 7fd0f9348140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:01:07.569 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.529+0000 7fd0f9348140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:01:07.621 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.350+0000 7f8b2651f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T14:01:07.622 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.462+0000 7f8b2651f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:01:07.622 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.503+0000 7f8b2651f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T14:01:07.622 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.538+0000 7f8b2651f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T14:01:07.622 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.580+0000 7f8b2651f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T14:01:07.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.617+0000 7f8b2651f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:01:07.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.802+0000 7f8b2651f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:01:07.991 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:07 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:07.854+0000 7f8b2651f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:01:08.017 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.566+0000 7fd0f9348140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T14:01:08.017 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.735+0000 7fd0f9348140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T14:01:08.017 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:07.786+0000 7fd0f9348140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T14:01:08.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.014+0000 7fd0f9348140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:01:08.403 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.087+0000 7f8b2651f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.313+0000 7fd0f9348140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.352+0000 7fd0f9348140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 
2026-03-09T14:01:08.395+0000 7fd0f9348140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.477+0000 7fd0f9348140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.516+0000 7fd0f9348140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T14:01:08.603 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.598+0000 7fd0f9348140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:01:08.695 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.399+0000 7f8b2651f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T14:01:08.695 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.439+0000 7f8b2651f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T14:01:08.695 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.483+0000 7f8b2651f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T14:01:08.695 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.569+0000 7f8b2651f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T14:01:08.695 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.608+0000 7f8b2651f140 -1 mgr[py] Module telegraf 
has missing NOTIFY_TYPES member 2026-03-09T14:01:08.860 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.713+0000 7fd0f9348140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:01:08.860 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.856+0000 7fd0f9348140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:01:08.961 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.691+0000 7f8b2651f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T14:01:08.961 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.815+0000 7f8b2651f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:08 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:08.894+0000 7fd0f9348140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:09] ENGINE Bus STARTING 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: CherryPy Checker: 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: The Application mounted at '' has an empty config. 
2026-03-09T14:01:09.117 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: Active manager daemon y restarted 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: Activating manager daemon y 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: mgrmap e18: y(active, starting, since 0.0342219s), standbys: x 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: Active manager daemon y restarted 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: Activating manager daemon y 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:01:09.117 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: mgrmap e18: y(active, starting, since 0.0342219s), standbys: x 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:01:09.117 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:08 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: Active manager daemon y restarted 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: Activating manager daemon y 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: osdmap e56: 8 total, 8 up, 8 in 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: mgrmap e18: y(active, starting, since 0.0342219s), standbys: x 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: 
dispatch 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:08 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:08 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:08.972+0000 7f8b2651f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:09.014+0000 7f8b2651f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: [09/Mar/2026:14:01:09] ENGINE Bus STARTING 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: CherryPy Checker: 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: The Application mounted at '' has an empty config. 
2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: [09/Mar/2026:14:01:09] ENGINE Serving on http://:::9283 2026-03-09T14:01:09.242 INFO:journalctl@ceph.mgr.x.vm04.stdout:Mar 09 14:01:09 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-x[55799]: [09/Mar/2026:14:01:09] ENGINE Bus STARTED 2026-03-09T14:01:09.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:09] ENGINE Serving on http://:::9283 2026-03-09T14:01:09.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:09] ENGINE Bus STARTED 2026-03-09T14:01:10.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:01:10.242 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: Manager daemon y is now available 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: Standby manager daemon x restarted 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: Standby manager daemon x started 
2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:09 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:10.242 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:01:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:01:10.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: Manager daemon y is now available 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: Standby manager daemon x restarted 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: Standby manager daemon x started 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.? 
192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 6}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: Manager daemon y is now available 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: Standby manager daemon x restarted 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: Standby manager daemon x started 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.? 
192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.? 192.168.123.104:0/3616514928' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:10.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-09T14:01:11.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: mgrmap e19: y(active, since 1.09998s), standbys: x 2026-03-09T14:01:11.292 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: [09/Mar/2026:14:01:10] ENGINE Bus STARTING 2026-03-09T14:01:11.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: [09/Mar/2026:14:01:10] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T14:01:11.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: [09/Mar/2026:14:01:10] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: [09/Mar/2026:14:01:10] ENGINE Bus STARTED 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: [09/Mar/2026:14:01:10] ENGINE Client ('192.168.123.103', 33600) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 
ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: mgrmap e19: y(active, since 1.09998s), standbys: x 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: [09/Mar/2026:14:01:10] ENGINE Bus STARTING 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: [09/Mar/2026:14:01:10] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: [09/Mar/2026:14:01:10] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: [09/Mar/2026:14:01:10] ENGINE Bus STARTED 
2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: [09/Mar/2026:14:01:10] ENGINE Client ('192.168.123.103', 33600) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: mgrmap e19: y(active, since 1.09998s), standbys: x 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: [09/Mar/2026:14:01:10] ENGINE Bus STARTING 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: [09/Mar/2026:14:01:10] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: [09/Mar/2026:14:01:10] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: [09/Mar/2026:14:01:10] ENGINE Bus STARTED 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: [09/Mar/2026:14:01:10] ENGINE Client ('192.168.123.103', 33600) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:11.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:12 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
09 14:01:12 vm03 ceph-mon[52586]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.conf 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.conf 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.conf 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: mgrmap e20: y(active, since 3s), standbys: x 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:12.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:12 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:13.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 
09 14:01:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:01:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:01:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:13 vm04 ceph-mon[54203]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:13 vm04 ceph-mon[54203]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:13 vm04 ceph-mon[54203]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:13 vm04 ceph-mon[54203]: Deploying daemon alertmanager.a on vm03 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[52586]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[52586]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[52586]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[52586]: Deploying daemon alertmanager.a on vm03 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[58994]: Updating vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[58994]: Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:13 vm03 
ceph-mon[58994]: Updating vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/config/ceph.client.admin.keyring 2026-03-09T14:01:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:13 vm03 ceph-mon[58994]: Deploying daemon alertmanager.a on vm03 2026-03-09T14:01:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:14 vm04 ceph-mon[54203]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:14 vm04 ceph-mon[54203]: mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T14:01:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:14 vm03 ceph-mon[52586]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:14 vm03 ceph-mon[52586]: mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T14:01:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:14 vm03 ceph-mon[58994]: pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:14 vm03 ceph-mon[58994]: mgrmap e21: y(active, since 4s), standbys: x 2026-03-09T14:01:16.542 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 systemd[1]: Starting Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:01:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:16 vm04 ceph-mon[54203]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:17.001 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-mon[52586]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:17.002 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:16] ENGINE Bus STOPPING 2026-03-09T14:01:17.002 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:16 vm03 ceph-mon[58994]: pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 podman[88017]: 2026-03-09 14:01:16.61518879 +0000 UTC m=+0.024442513 volume create 6da366211e10f383d581e367f7c4d25013b48587f64e757ae216fb95796a7cda 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 podman[88017]: 2026-03-09 14:01:16.618754885 +0000 UTC m=+0.028008607 container create d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 podman[88017]: 2026-03-09 14:01:16.658294695 +0000 UTC m=+0.067548428 container init d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 podman[88017]: 2026-03-09 14:01:16.660817327 +0000 UTC m=+0.070071060 container start 
d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 bash[88017]: d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 podman[88017]: 2026-03-09 14:01:16.607743684 +0000 UTC m=+0.016997428 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 systemd[1]: Started Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.720Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.725Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.726Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.103 port=9094 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.729Z caller=cluster.go:681 level=info 
component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.769Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.769Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.771Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T14:01:17.002 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:16 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:16.771Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T14:01:17.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:17] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:01:17.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:17] ENGINE Bus STOPPED 2026-03-09T14:01:17.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:17] ENGINE Bus STARTING 2026-03-09T14:01:17.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:17] ENGINE Serving on http://:::9283 2026-03-09T14:01:17.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:17] ENGINE Bus STARTED 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 
2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:17 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 
2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T14:01:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:18.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:18.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:17 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[52586]: Deploying daemon grafana.a on vm04 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[52586]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[58994]: Deploying daemon grafana.a on vm04 2026-03-09T14:01:19.013 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:19 vm03 ceph-mon[58994]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:01:19.013 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:18 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:18.733Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001018853s 2026-03-09T14:01:19.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:19 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T14:01:19.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:19 vm04 ceph-mon[54203]: Deploying daemon grafana.a on vm04 2026-03-09T14:01:19.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:19 vm04 ceph-mon[54203]: pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-09T14:01:20.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:20 vm04 ceph-mon[54203]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:01:20.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:20 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:20.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:01:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:01:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:20 vm03 ceph-mon[52586]: pgmap v8: 132 pgs: 
132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:01:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:20 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:20 vm03 ceph-mon[58994]: pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T14:01:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:20 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:21.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:21.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:22 vm04 ceph-mon[54203]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:22.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:22 vm03 ceph-mon[52586]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:22 vm03 ceph-mon[58994]: pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:23 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:01:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:01:24.116 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 systemd[1]: Starting Ceph grafana.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 podman[80070]: 2026-03-09 14:01:24.113999017 +0000 UTC m=+0.017707165 container create 539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a, maintainer=Grafana Labs ) 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 podman[80070]: 2026-03-09 14:01:24.158976544 +0000 UTC m=+0.062684712 container init 539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a, maintainer=Grafana Labs ) 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 podman[80070]: 2026-03-09 14:01:24.162421843 +0000 UTC m=+0.066130000 container start 539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a, maintainer=Grafana Labs ) 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 bash[80070]: 539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 podman[80070]: 2026-03-09 14:01:24.106193396 +0000 UTC m=+0.009901553 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 systemd[1]: Started Ceph grafana.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 
2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257274344Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-09T14:01:24Z 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257425006Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257429174Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257431338Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257432991Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257434644Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257436257Z level=info msg="Config overridden from 
command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.25743778Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257439663Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257441567Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.25744316Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257444704Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257446277Z level=info msg=Target target=[all] 2026-03-09T14:01:24.370 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257449643Z level=info 
msg="Path Home" path=/usr/share/grafana 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257451306Z level=info msg="Path Data" path=/var/lib/grafana 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257452789Z level=info msg="Path Logs" path=/var/log/grafana 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257454282Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257455825Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=settings t=2026-03-09T14:01:24.257457377Z level=info msg="App mode production" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=sqlstore t=2026-03-09T14:01:24.257585748Z level=info msg="Connecting to DB" dbtype=sqlite3 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=sqlstore t=2026-03-09T14:01:24.257593022Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.264022295Z level=info msg="Starting DB migrations" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.264801162Z level=info msg="Executing migration" id="create migration_log table" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.265388212Z level=info msg="Migration successfully executed" id="create migration_log table" duration=586.87µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.266613966Z level=info msg="Executing migration" id="create user table" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.266958601Z level=info msg="Migration successfully executed" id="create user table" duration=344.976µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.26763131Z level=info msg="Executing migration" id="add unique index user.login" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.267962811Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=331.361µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.268910515Z level=info msg="Executing migration" id="add unique index user.email" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.269341Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=430.706µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.270014Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.270403749Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=390.039µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.270920577Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.271262437Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=341.969µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.271834889Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.272836373Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.001304ms 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.273507338Z level=info msg="Executing migration" id="create user table v2" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.273887851Z level=info msg="Migration successfully executed" id="create user table v2" duration=380.453µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.274417613Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.274774361Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=356.709µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.275287381Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.27567173Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=384.179µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.276311497Z level=info msg="Executing migration" id="copy data_source v1 to v2" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.27655318Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=241.604µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.277059778Z level=info msg="Executing migration" id="Drop old table user_v1" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.277384176Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=324.338µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.277885453Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.278412992Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=528.781µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.278907898Z level=info msg="Executing migration" id="Update user table charset" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.278978299Z level=info msg="Migration successfully executed" id="Update user table charset" duration=70.822µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.279531414Z level=info msg="Executing migration" id="Add last_seen_at column to user" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.280021131Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=493.584µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.280564167Z level=info msg="Executing migration" id="Add missing user data" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.280740227Z level=info msg="Migration successfully executed" id="Add missing user data" duration=175.949µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.281402817Z level=info msg="Executing migration" id="Add is_disabled column to user" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.281894147Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=491.38µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.282529085Z level=info msg="Executing migration" id="Add index user.login/user.email" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.282942759Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=413.805µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.283456712Z level=info msg="Executing migration" id="Add is_service_account column to user" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.283951678Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=496.119µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.284503561Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.287478219Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=2.973796ms 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.288062783Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.288706628Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=643.805µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.28928967Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.289517656Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=228.117µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.290131866Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.290608478Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=475.92µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.29138415Z level=info msg="Executing migration" id="create temp user table v1-7" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.29179546Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=411.4µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.292866766Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.293261514Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=391.263µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.293947197Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.294328381Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=379.039µs 2026-03-09T14:01:24.371 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.294962328Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.295323183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=360.976µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.29597857Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 2026-03-09T14:01:24.372 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.296365563Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=374.36µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.29723431Z level=info msg="Executing migration" id="Update temp_user table charset" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.297270036Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=36.459µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.297840384Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.29820682Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=366.466µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.298750659Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.299117354Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" 
duration=366.706µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.299634863Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.300000598Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=365.734µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.300495895Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.300869314Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=373.619µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.301393025Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.302578604Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.18559ms 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.303042431Z 
level=info msg="Executing migration" id="create temp_user v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.303478548Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=434.865µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.303988892Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.304385696Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=396.473µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.304921068Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.305291061Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=369.853µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.30579324Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.306194942Z 
level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=401.673µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.306667065Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.307052678Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=384.54µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.307674001Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.307895296Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=221.054µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.308387185Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.308682358Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=295.032µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.309193185Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.309440447Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=247.182µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.309967314Z level=info msg="Executing migration" id="create star table" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.310290608Z level=info msg="Migration successfully executed" id="create star table" duration=323.144µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.310775066Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.311173962Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=398.806µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.311841291Z level=info msg="Executing migration" id="create org table v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.312247962Z level=info msg="Migration successfully executed" id="create org table v1" duration=404.958µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.312919439Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.313381784Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=463.097µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.314029266Z level=info msg="Executing migration" id="create org_user table v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.314436238Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=405.519µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.315061979Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.315551867Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=489.948µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 
vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.316190141Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.316642208Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=451.856µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.317274179Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.317707041Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=431.699µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.318259275Z level=info msg="Executing migration" id="Update org table charset" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.318293358Z level=info msg="Migration successfully executed" id="Update org table charset" duration=34.164µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.318931963Z level=info msg="Executing migration" id="Update org_user table charset" 2026-03-09T14:01:24.372 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.31896777Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=35.698µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.31939507Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.319518752Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=124.042µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.320051298Z level=info msg="Executing migration" id="create dashboard table" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.320472567Z level=info msg="Migration successfully executed" id="create dashboard table" duration=420.878µs 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.321098508Z level=info msg="Executing migration" id="add index dashboard.account_id" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.321567296Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=468.688µs 
2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.322522474Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 2026-03-09T14:01:24.372 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.322957938Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=435.504µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.323629025Z level=info msg="Executing migration" id="create dashboard_tag table" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.323973861Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=344.915µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.324624488Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.32504216Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=417.652µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.325973603Z level=info msg="Executing migration" 
id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.326405311Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=430.576µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.326912321Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.329111336Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=2.199225ms 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.329812048Z level=info msg="Executing migration" id="create dashboard v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.330494515Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=681.936µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.331055395Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.33152246Z 
level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=467.876µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.332286219Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.332856166Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=568.304µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.333527653Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.333780135Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=252.101µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.33453078Z level=info msg="Executing migration" id="drop table dashboard_v1" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.335078075Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=548.557µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.335677417Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.335750704Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=74.23µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.336450003Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.337242487Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=792.583µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.33774154Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.338461789Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=720.378µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.339020734Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.339765469Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=744.694µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.340243654Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.340762956Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=519.232µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.341280614Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.342066205Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=785.44µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.342548257Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.342974115Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=424.335µs 2026-03-09T14:01:24.373 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.343456077Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.343817112Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=361.246µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.344426923Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.344457841Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=31.049µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.344960823Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.344992362Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=31.93µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.34539745Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 
2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.346083094Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=685.684µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.346528517Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.347193252Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=662.45µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.347636951Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.348339146Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=702.085µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.348813595Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.349488267Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" 
duration=674.682µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.3499546Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.350087349Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=132.898µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.350625416Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.351024383Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=399.287µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.351561438Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.351938203Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=376.885µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.352447367Z level=info msg="Executing migration" 
id="Update dashboard title length" 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.352478024Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=32.681µs 2026-03-09T14:01:24.373 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.352985323Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.353423354Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=438.28µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.353894785Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.354280226Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=386.122µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.354914674Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.356801075Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.886161ms 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.357299609Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.357696381Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=396.792µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.358264244Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.358706511Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=442.138µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.359265217Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.35969411Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 
duration=428.752µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.360281199Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.360503595Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=222.646µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.360946904Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.361307851Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=360.706µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.361792187Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.362518457Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=725.899µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.362951538Z level=info msg="Executing migration" id="Add index for 
dashboard_title" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.363304959Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=353.411µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.36378165Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.36389294Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=110.628µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.364440695Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.364548766Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=108.103µs 2026-03-09T14:01:24.374 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.36507364Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-09T14:01:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:01:24.374 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:24 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.36544764Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=374.049µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.371080893Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.371868157Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=787.544µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.372349589Z level=info msg="Executing migration" id="create data_source table" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.372822965Z level=info msg="Migration successfully executed" id="create data_source table" duration=473.426µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.373398482Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.373858663Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=460.051µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.377488638Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.377909966Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=421.658µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.397037097Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.397829721Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=792.724µs 2026-03-09T14:01:24.621 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.398489366Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.398923007Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=433.871µs 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.39947437Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-09T14:01:24.621 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.401427194Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=1.952654ms 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.402029773Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.402498099Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=468.115µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.403029124Z level=info msg="Executing migration" id="create 
index IDX_data_source_org_id - v2" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.403473575Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=444.432µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.403969003Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.404438891Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=469.879µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.405110669Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.40544234Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=331.66µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.405947185Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.406834166Z level=info 
msg="Migration successfully executed" id="Add column with_credentials" duration=886.64µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.407386268Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.408216923Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=830.354µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.40878116Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.408801288Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=20.379µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.409440864Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.40958786Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=147.046µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.410109797Z level=info 
msg="Executing migration" id="Add read_only data column" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.411010482Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=900.566µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.411585619Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.411732283Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=146.094µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.412273947Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.412420642Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=146.786µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.412972455Z level=info msg="Executing migration" id="Add uid column" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.413867219Z level=info msg="Migration successfully executed" id="Add uid 
column" duration=893.071µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.414601284Z level=info msg="Executing migration" id="Update uid value" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.41474876Z level=info msg="Migration successfully executed" id="Update uid value" duration=148.157µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.415322935Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.415746167Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=423.102µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.416235804Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.416639007Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=403.234µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.41734014Z level=info msg="Executing migration" 
id="create api_key table" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.41777316Z level=info msg="Migration successfully executed" id="create api_key table" duration=432.71µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.418673616Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.419082481Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=408.535µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.419689256Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.420084988Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=395.681µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.420700179Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.421135083Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" 
duration=434.774µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.421799156Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.422189668Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=389.069µs 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.422723395Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-09T14:01:24.622 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.423121672Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=398.506µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.423601991Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.424000086Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=398.056µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.424516893Z level=info msg="Executing 
migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.426566669Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.049707ms 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.427093275Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.427482183Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=388.757µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.427986478Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.428408688Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=422.199µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.428903374Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.429300386Z level=info 
msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=396.943µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.429830429Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.430223865Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=393.436µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.430824069Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.431048679Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=224.329µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.431482631Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.431792963Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=310.241µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.432276567Z 
level=info msg="Executing migration" id="Update api_key table charset" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.432285624Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=9.669µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.432880688Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.433760214Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=879.516µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.434244761Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.435129517Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=884.877µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.435678845Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.435811424Z level=info 
msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=132.408µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.436308875Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.43722471Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=915.985µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.437761815Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.438668882Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=907.529µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.439169519Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.43956542Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=395.69µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.44009921Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.440438264Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=338.693µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.440939993Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.44134961Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=409.336µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.441895412Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.442307924Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=413.143µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.442929477Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.443371624Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=442.037µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.443923217Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.444322144Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=398.856µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.444904344Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.444978352Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=74.269µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.445492354Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.445503705Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table 
charset" duration=11.702µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.446027867Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.446988895Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=961.128µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.447495663Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.448443688Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=947.874µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.449035506Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.449110346Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=75.23µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.449646149Z level=info msg="Executing migration" id="create quota table v1" 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.450017194Z level=info msg="Migration successfully executed" id="create quota table v1" duration=370.863µs 2026-03-09T14:01:24.623 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.45056529Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.450967993Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=402.573µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.4515269Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.451538331Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=11.932µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.452069435Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.452459165Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=389.529µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.453102629Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.453518728Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=415.969µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.454100586Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.455103033Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.002147ms 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.455559788Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.455569396Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=9.729µs 2026-03-09T14:01:24.624 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.456105719Z level=info msg="Executing migration" id="create session table" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.456538961Z level=info msg="Migration successfully executed" id="create session table" duration=433.102µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.457538973Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.457624373Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.48µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.458151149Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.458234705Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=83.566µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.458762182Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-09T14:01:24.624 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.459149848Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=387.496µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.459749299Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.460116878Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=366.827µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.460739533Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.460751094Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=12.414µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.461294171Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.461304851Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=10.791µs 
2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.461903623Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.462963346Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.059332ms 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.46347874Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.4645154Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.037782ms 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.464997594Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.465088965Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=91.682µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.465607215Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-09T14:01:24.624 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.465691442Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=84.388µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.46618197Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.466595525Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=413.204µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.467192081Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.467201548Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=9.578µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.467788447Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.468875051Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.085632ms 
2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.469444187Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.469561396Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=117.289µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.470070879Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.471152514Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.081704ms 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.471661106Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.4727263Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.065144ms 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.473240783Z level=info msg="Executing migration" id="alter 
preferences.json_data to mediumtext v1" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.473294133Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=53.26µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.473928419Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.474475915Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=547.084µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.475042655Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.475642408Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=599.562µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.476210582Z level=info msg="Executing migration" id="create alert table v1" 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.476741164Z level=info msg="Migration successfully executed" 
id="create alert table v1" duration=531.074µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.477405969Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.477870257Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=463.427µs 2026-03-09T14:01:24.624 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.478481412Z level=info msg="Executing migration" id="add index alert state" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.478948126Z level=info msg="Migration successfully executed" id="add index alert state" duration=466.704µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.47953309Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.479999904Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=466.784µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.480630555Z level=info msg="Executing migration" id="Create alert_rule_tag table 
v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.481005426Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=374.512µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.481599137Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.482068246Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=469.249µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.482633163Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.48344338Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=806.719µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.484100931Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.487129599Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=3.028569ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.487708092Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.488167451Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=459.639µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.488844029Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.489276177Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=432.309µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.489968313Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.490160713Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" 
duration=192.34µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.490662653Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.490968114Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=305.522µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.491440178Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.491883347Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=440.695µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.492417377Z level=info msg="Executing migration" id="Add column is_default" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.49361062Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.193234ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.494136575Z level=info msg="Executing migration" id="Add column frequency" 
2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.495682689Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.545853ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.496208213Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.49755335Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.344746ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.498081008Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.499328052Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.247364ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.499955416Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.500421669Z level=info msg="Migration successfully executed" id="add index alert_notification org_id 
& name" duration=466.352µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.501134453Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.501168277Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=36.308µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.50185901Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.501892262Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=33.654µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.502472738Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.50293284Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=458.208µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.503869603Z level=info msg="Executing migration" id="add 
index notification_journal org_id & alert_id & notifier_id" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.504351815Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=482.262µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.505015017Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.505484014Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=469.067µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.506065082Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.506535984Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=470.752µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.507086664Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.507584616Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=497.832µs 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.508055497Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.509287013Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.233248ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.509862791Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.511036287Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.173636ms 2026-03-09T14:01:24.625 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.511515454Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.511623607Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=108.423µs 
2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.512139142Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.512550842Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=411.891µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.513153531Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.513561334Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=407.443µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.514044057Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.515384937Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.340389ms 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.51590002Z level=info 
msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.515953681Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=54.232µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.516447666Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.516879805Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=432.138µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.517343894Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.517828221Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=484.497µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.518510346Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.51857608Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=65.753µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.519144244Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.519566523Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=422.379µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.520186695Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.520603094Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=416.389µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.521181877Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.52157368Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=391.824µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.522086871Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.522479576Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=392.354µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.523059882Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.523494806Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=433.441µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.524077527Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.52454323Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=465.532µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.525151407Z level=info msg="Executing migration" id="Update annotation table charset" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.525187054Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=35.977µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.525722105Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.527024874Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.302657ms 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.527635567Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.528015909Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=380.572µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.528437889Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.529644928Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.206027ms 2026-03-09T14:01:24.626 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.530126259Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.530464432Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=338.245µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.530913152Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.531294536Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=381.364µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.531872588Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.532253451Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=379.52µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.53270741Z level=info 
msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.535999974Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=3.292022ms 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.536477557Z level=info msg="Executing migration" id="Create annotation_tag table v3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.536830228Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=352.32µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.537292643Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.537710164Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=417.19µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.538296061Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 
2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.538485726Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=189.335µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.538933695Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.539214139Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=280.304µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.539714196Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.539813893Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=100.237µs 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.540305733Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.541592081Z level=info msg="Migration successfully executed" id="Add 
created time to annotation table" duration=1.286288ms 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.542091365Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.543282224Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.191681ms 2026-03-09T14:01:24.626 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.543740462Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.54412471Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=384.119µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.544551931Z level=info msg="Executing migration" id="Add index for updated in annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.544917325Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=365.334µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.545496569Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.545616203Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=119.744µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.5460798Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.547294705Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.214875ms 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.54783213Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.548304695Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=472.394µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.54914643Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.549251808Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=105.619µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.549781358Z level=info msg="Executing migration" id="Move region to single row" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.549971525Z level=info msg="Migration successfully executed" id="Move region to single row" duration=190.336µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.550515793Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.55102165Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=505.807µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.551502851Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.551928818Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=425.967µs 2026-03-09T14:01:24.627 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.552409428Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.552813163Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=403.715µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.553269989Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.553654178Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=384.279µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.554088231Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.554476697Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=388.526µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.554925097Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.555303535Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=378.379µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.555756232Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.555805264Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=48.611µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.556439792Z level=info msg="Executing migration" id="create test_data table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.556838929Z level=info msg="Migration successfully executed" id="create test_data table" duration=399.167µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.557408986Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 
vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.557751969Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=342.08µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.558252414Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.558642615Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=389.238µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.559137731Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.559533823Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=395.77µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.560073623Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.560184571Z level=info msg="Migration successfully executed" 
id="Set dashboard version to 1 where 0" duration=111.328µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.560735573Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.560930397Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=194.855µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.561401258Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.561447094Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=46.387µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.561939476Z level=info msg="Executing migration" id="create team table" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.562289571Z level=info msg="Migration successfully executed" id="create team table" duration=350.135µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.562840693Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.563283441Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=442.938µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.563863648Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.564252435Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=388.828µs 2026-03-09T14:01:24.627 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.564822302Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.566127875Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.305684ms 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.566563421Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.566661184Z 
level=info msg="Migration successfully executed" id="Update uid column values in team" duration=97.984µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.567121715Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.567513449Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=391.573µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.568034835Z level=info msg="Executing migration" id="create team member table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.568403535Z level=info msg="Migration successfully executed" id="create team member table" duration=368.45µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.568929599Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.569294513Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=364.904µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.569839473Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.570206169Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=366.597µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.570765666Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.571121322Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=355.695µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.571651494Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.573135403Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.484007ms 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.57363136Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.57513196Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=1.500689ms 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.575662312Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.577081278Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.418856ms 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.577571294Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.578064859Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=493.625µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.578700509Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.579161852Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=461.383µs 2026-03-09T14:01:24.628 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.579989742Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.580557554Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=567.682µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.581171083Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.581645802Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=474.719µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.582218713Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.582631798Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=412.783µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.583142192Z level=info msg="Executing 
migration" id="add index dashboard_acl_team_id" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.58353608Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=393.968µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.584075159Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.584480879Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=406.561µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.585001041Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.585483636Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=482.383µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.586038062Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.586292149Z level=info 
msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=254.427µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.586799117Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.58691828Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=118.871µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.587419238Z level=info msg="Executing migration" id="create tag table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.587773411Z level=info msg="Migration successfully executed" id="create tag table" duration=354.072µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.588349359Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.588744408Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=395.36µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.589297274Z level=info msg="Executing migration" id="create login attempt table" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.589633112Z level=info msg="Migration successfully executed" id="create login attempt table" duration=334.957µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.590132577Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.590519902Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=387.224µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.591055895Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.591454841Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=398.555µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.591952303Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.595831024Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=3.87848ms 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.596327733Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.596673179Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=345.666µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.597122911Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.597516237Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=393.386µs 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.598039967Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-09T14:01:24.628 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.59821243Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=172.963µs 2026-03-09T14:01:24.628 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.598647425Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.598931186Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=283.721µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.599395855Z level=info msg="Executing migration" id="create user auth table" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.599730933Z level=info msg="Migration successfully executed" id="create user auth table" duration=335.178µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.600166839Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.600568471Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=401.512µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.601128138Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 
2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.601175767Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=48.291µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.601745774Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.603294242Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.548628ms 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.603785061Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.605346114Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.560882ms 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.60585757Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.607389258Z level=info msg="Migration successfully executed" id="Add 
OAuth token type to user_auth" duration=1.531607ms 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.607895414Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.609473258Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.577553ms 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.609973343Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.610447832Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=474.54µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.611030113Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.612780007Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.749894ms 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.613285644Z level=info 
msg="Executing migration" id="create server_lock table" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.613697144Z level=info msg="Migration successfully executed" id="create server_lock table" duration=411.62µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.61455538Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.615020642Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=465.051µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 
vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.61560777Z level=info msg="Executing migration" id="create user auth token table" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.616063654Z level=info msg="Migration successfully executed" id="create user auth token table" duration=455.965µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.616686429Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.617193849Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=507.41µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.617760279Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.618249334Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=489.034µs 2026-03-09T14:01:24.629 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.618832546Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-09T14:01:24.629 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.629 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.619275956Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=444.201µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.627098628Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.629154677Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=2.057312ms 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.629734191Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.630218148Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=483.757µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.630847004Z level=info msg="Executing migration" id="create cache_data table" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.631275817Z level=info msg="Migration successfully executed" id="create cache_data table" duration=428.823µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.631865552Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.632380455Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=515.003µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.633080065Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.633575772Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=495.697µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.634208887Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.634739601Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=530.554µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.635385709Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.635481599Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=97.082µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.636041127Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.636145752Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=104.727µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 
14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.636697817Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.637160302Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=462.705µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.637789549Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.638555514Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=765.392µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.639529947Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.640134839Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=606.284µs 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.641077053Z level=info msg="Executing migration" id="alter 
alert_definition table data column to mediumtext in mysql" 2026-03-09T14:01:24.878 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.64110789Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=31.098µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.64167892Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.642197069Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=518.259µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.642672159Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.643108605Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=436.526µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.643595897Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 2026-03-09T14:01:24.879 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.644066418Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=470.68µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.644578977Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.645099662Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=518.69µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.645589319Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.647440433Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.851266ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.648051217Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.648537807Z 
level=info msg="Migration successfully executed" id="drop alert_definition table" duration=485.599µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.649061819Z level=info msg="Executing migration" id="delete alert_definition_version table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.649161966Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=100.148µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.649696546Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.65011513Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=418.434µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.650621066Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.651108239Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=485.97µs 2026-03-09T14:01:24.879 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.65160578Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.652090517Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=484.758µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.652638102Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.652725916Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=87.594µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.653334846Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.653788275Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=453.149µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.654345889Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.654814006Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=467.997µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.655309072Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.655783741Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=475.551µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.656258298Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.656716487Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=457.798µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.657305157Z 
level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.659346829Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=2.041381ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.659849099Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.660311265Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=462.116µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.660851936Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.661333036Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=480.83µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.661871785Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-09T14:01:24.879 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.671376716Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=9.49963ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.672208283Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.680879463Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=8.667694ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.681622835Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.682112581Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=490.157µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.682598331Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.683006124Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=408.033µs 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.683664156Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.68541839Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.754333ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.685865315Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.687530703Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.665258ms 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.688010591Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-09T14:01:24.879 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.688449412Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=438.801µs 2026-03-09T14:01:24.879 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.689065806Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.689526599Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=459.339µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.690104511Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.690576384Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=471.692µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.691115333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.691597315Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=481.741µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.692143707Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.692213237Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=69.842µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.692713234Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.694553759Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.841547ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.695039488Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.69672879Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.687569ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.697246829Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.698988309Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.74125ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.699513222Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.699925172Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=412.341µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.700430729Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.700910086Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=477.995µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.701389163Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.703142245Z level=info msg="Migration 
successfully executed" id="add dashboard_uid column to alert_rule" duration=1.752931ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.703613246Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.705293862Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.680435ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.705833431Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.70626556Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=432.089µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.706862878Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.708709795Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.846476ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.709247753Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.711005352Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.756668ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.711508504Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.711581871Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=72.044µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.712107265Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.71261751Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=510.135µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.713266695Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 
2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.713740302Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=474.489µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.714330526Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.714824992Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=495.538µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.71544957Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.715519081Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=70.042µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.716010711Z level=info msg="Executing migration" id="add column for to alert_rule_version" 2026-03-09T14:01:24.880 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.717889828Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.878716ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.718350651Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.720137885Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.787305ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.720632222Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.722418014Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.785643ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.722861103Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.724615778Z level=info msg="Migration successfully 
executed" id="add rule_group_idx column to alert_rule_version" duration=1.754524ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.725088192Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.726878873Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.790561ms 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.727348613Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.727429885Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=81.522µs 2026-03-09T14:01:24.880 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.727921005Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.728277211Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=356.257µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.728878337Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.730946297Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=2.06767ms 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.73143998Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.731513578Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=73.418µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.732000029Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.733839332Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.839192ms 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.734385364Z level=info 
msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.734834495Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=449.402µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.735432614Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.737283229Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.850744ms 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.737790327Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.738143218Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=353.442µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.738742901Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.7391544Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=411.349µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.739968274Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.741870013Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.900587ms 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.742385168Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.742752094Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=366.926µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.743335757Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.743772724Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, 
record_type, org_id) columns" duration=436.858µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.74429901Z level=info msg="Executing migration" id="create alert_image table" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.744679822Z level=info msg="Migration successfully executed" id="create alert_image table" duration=380.773µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.745272583Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.74570983Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=437.427µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.746247016Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.746318741Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=71.894µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.746861006Z 
level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.747326056Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=464.91µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.747929656Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.748390247Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=460.371µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.748879274Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.749072745Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.749569575Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-09T14:01:24.881 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.749843268Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=273.663µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.750297929Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.750750145Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=451.235µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.751193394Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.753161509Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=1.968134ms 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.753634314Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.754114873Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=480.589µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.75473796Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.755209943Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=473.295µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.755746868Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.756139131Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=392.123µs 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.756690023Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.757163599Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=472.443µs 
2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.757743274Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-09T14:01:24.881 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.75818452Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=441.065µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.758753265Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.758784373Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.799µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.759289078Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.759347347Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=58.3µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.759832074Z level=info msg="Executing 
migration" id="clone move dashboard alerts to unified alerting" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.759993947Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=161.903µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.760396651Z level=info msg="Executing migration" id="create data_keys table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.760822738Z level=info msg="Migration successfully executed" id="create data_keys table" duration=425.846µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.761394088Z level=info msg="Executing migration" id="create secrets table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.761755114Z level=info msg="Migration successfully executed" id="create secrets table" duration=361.035µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.762576612Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.77250747Z level=info msg="Migration successfully 
executed" id="rename data_keys name column to id" duration=9.928174ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.773117722Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.775277554Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.159542ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.775807707Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.775908385Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=100.999µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.776394125Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.785870332Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=9.475687ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.786438785Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.795958474Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=9.519297ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.796546315Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.796963175Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=419.585µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.797572004Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.798055589Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=483.474µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.798677654Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.798812606Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=135.103µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.799413301Z level=info msg="Executing migration" id="create permission table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.799831052Z level=info msg="Migration successfully executed" id="create permission table" duration=416.198µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.800478675Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.800930121Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=451.416µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.801540483Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.802005163Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=465.703µs 2026-03-09T14:01:24.882 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.802583324Z level=info msg="Executing migration" id="create role table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.802969718Z level=info msg="Migration successfully executed" id="create role table" duration=386.514µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.803539714Z level=info msg="Executing migration" id="add column display_name" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.805789125Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.249231ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.806232775Z level=info msg="Executing migration" id="add column group_name" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.808510919Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.277763ms 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.809034109Z level=info msg="Executing migration" id="add index role.org_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.809523885Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=489.997µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.810083863Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.810572859Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=487.622µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.811143397Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.811693356Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=549.88µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.812233798Z level=info msg="Executing migration" id="create team role table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.812647021Z level=info msg="Migration successfully executed" id="create team role table" duration=413.214µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.813197201Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.813692338Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=494.997µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.814267053Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.814769184Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=502.021µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.815386289Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.81583015Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=445.123µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.81639673Z level=info msg="Executing migration" id="create user role table" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.816787431Z level=info msg="Migration successfully executed" id="create user role table" duration=390.58µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.817642903Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.81809019Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=447.308µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.81859281Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.819068371Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=476.613µs 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.819611447Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-09T14:01:24.882 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.820053195Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=441.678µs 2026-03-09T14:01:24.883 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.820586452Z level=info msg="Executing migration" id="create builtin role table" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.820961976Z level=info msg="Migration successfully executed" id="create builtin role table" duration=375.804µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.821513608Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.821996602Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=482.895µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.822561289Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.823100187Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=539µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.823691203Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-09T14:01:24.883 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.826067452Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.376128ms 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.826528214Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.826984668Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=456.334µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.827525269Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.827973328Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=448.12µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.828547614Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.829017302Z level=info msg="Migration successfully executed" id="Remove 
unique index role_org_id_uid" duration=467.795µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.829463759Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.829911536Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=447.778µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.830401113Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.830761899Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=360.835µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.831327607Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.831788369Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=460.791µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.832322529Z level=info msg="Executing migration" 
id="add column hidden to role table" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.834646599Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.32391ms 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.835140043Z level=info msg="Executing migration" id="permission kind migration" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.837511131Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.370117ms 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.838010856Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.840219731Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.208846ms 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.840693588Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.842965971Z level=info msg="Migration successfully executed" 
id="permission identifier migration" duration=2.271803ms 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.843450077Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.843899629Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=449.442µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.844434009Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.844908498Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=474.107µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.845461562Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.845895515Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=433.772µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:24.846317314Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.846703818Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=386.484µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.847189286Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.847621105Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=431.68µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.848146307Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.848201791Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=54.712µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.849081017Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 
09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.849129017Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=47.939µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.849752494Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.849980581Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=228.307µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.850448748Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.850675742Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=227.105µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.851127147Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.851428101Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=300.724µs 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 
vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.852091351Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-09T14:01:24.883 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.852219611Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=127.419µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.852742931Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.853028786Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=286.115µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.853539393Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.853991668Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=451.224µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.854534605Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-09T14:01:24.884 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.855086738Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=552.013µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.855960093Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.858483567Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.522101ms 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.859002058Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.859056669Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=55.244µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.859604114Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.860061962Z level=info 
msg="Migration successfully executed" id="create correlation table v1" duration=457.737µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.86064896Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.861165236Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=516.287µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.86175501Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.862175527Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=420.648µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.862763258Z level=info msg="Executing migration" id="add correlation config column" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.865224754Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.461126ms 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.865698993Z 
level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.866114419Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=415.306µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.866592606Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.866993806Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=401.291µs 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.867481839Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.87448613Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=7.002758ms 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.875103536Z level=info msg="Executing migration" id="create correlation v2" 2026-03-09T14:01:24.884 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.875657883Z level=info msg="Migration successfully executed" id="create correlation v2" duration=554.367µs 2026-03-09T14:01:24.904 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:24 vm03 systemd[1]: Starting Ceph node-exporter.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.876171865Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.882000245Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=5.82875ms 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.882828014Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.883410945Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=582.839µs 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.884045373Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.8844984Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=451.425µs 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.885220121Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.885344083Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=124.082µs 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.885848428Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.88622912Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=380.552µs 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.956623441Z level=info msg="Executing migration" id="add provisioning column" 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.959396472Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.775034ms 2026-03-09T14:01:25.134 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: 
logger=migrator t=2026-03-09T14:01:24.973599597Z level=info msg="Executing migration" id="create entity_events table" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.974188289Z level=info msg="Migration successfully executed" id="create entity_events table" duration=590.074µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.986345333Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.986900683Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=557.403µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.996256886Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.996544945Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.997493209Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 
14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.997675Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.998170127Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.998611272Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=441.126µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.999087665Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:24 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:24.999642002Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=554.377µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.000236956Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.000697608Z level=info msg="Migration successfully executed" id="create index 
UQE_dashboard_public_config_uid - v1" duration=468.427µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.001252686Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.001691699Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=438.922µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.002198657Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.002595339Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=396.631µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.002978668Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.003427788Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=449.241µs 2026-03-09T14:01:25.135 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.003878732Z level=info msg="Executing migration" id="Drop public config table" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.004212998Z level=info msg="Migration successfully executed" id="Drop public config table" duration=334.285µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.004697405Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.005192772Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=495.176µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.005673913Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.006075644Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=401.781µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.006491804Z level=info msg="Executing migration" id="create index 
IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.006908243Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=416.118µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.007319342Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.007738677Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=419.454µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.008236458Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.016682177Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.444867ms 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.017197582Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 
14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.019633562Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.435839ms 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.020776521Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.023674675Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.897062ms 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.024144906Z level=info msg="Executing migration" id="delete orphaned public dashboards" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.024252997Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=108.183µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.024771318Z level=info msg="Executing migration" id="add share column" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.0271553Z level=info msg="Migration successfully executed" id="add share column" duration=2.383811ms 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 
14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.027573624Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.027661688Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=88.034µs 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.028097112Z level=info msg="Executing migration" id="create file table" 2026-03-09T14:01:25.135 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.028476414Z level=info msg="Migration successfully executed" id="create file table" duration=379.3µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.029050327Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.029476766Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=425.026µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.030545426Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-09T14:01:25.136 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.030945825Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=400.278µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.03257253Z level=info msg="Executing migration" id="create file_meta table" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.032900223Z level=info msg="Migration successfully executed" id="create file_meta table" duration=328.535µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.033434223Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.03382309Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=388.638µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.034344507Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.034379091Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=34.896µs 
2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.034833241Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.034857017Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=24.037µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.035198064Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.035437432Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=239.297µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.035872357Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.035962175Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=89.779µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.036289026Z level=info 
msg="Executing migration" id="RBAC action name migrator" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.036868782Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=580.106µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.037323623Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.039754423Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.43048ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.04015365Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.040221607Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=69.41µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.040704642Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.041156748Z level=info msg="Migration 
successfully executed" id="Add index for uid in playlist" duration=451.816µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.041662134Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.041806965Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=146.074µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.042227381Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.04231223Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=85.128µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.0427822Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.042965443Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=183.394µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.043380971Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.0457743Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.39339ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.046228881Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.04894767Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.718719ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.049411328Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.049841864Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=430.475µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.050268704Z level=info msg="Executing migration" id="update seed_assignment role_name column to 
nullable" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.075251094Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=24.98178ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.075739459Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.076276854Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=537.216µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.076818759Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.077302284Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=483.315µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.077857172Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.08614827Z level=info 
msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.290858ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.086824537Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.089287217Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.46259ms 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.089799696Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.089943004Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=142.536µs 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.090471343Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-09T14:01:25.136 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.09056085Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=89.538µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:25.091085012Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.091183716Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=99.726µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.09174671Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.091857938Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=111.348µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.092405353Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.092507915Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=101.6µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.093109862Z level=info msg="Executing migration" id="create folder table" 2026-03-09T14:01:25.137 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.093544354Z level=info msg="Migration successfully executed" id="create folder table" duration=434.492µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.094025295Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.094580164Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=554.847µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.095152825Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.095596075Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=442.989µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.09612751Z level=info msg="Executing migration" id="Update folder title length" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.096137729Z level=info msg="Migration successfully executed" id="Update folder title length" duration=10.56µs 
2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.096603611Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.097055086Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=451.185µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.097583495Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.097980829Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=397.354µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.098395645Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.098856248Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=460.362µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.099370049Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.099578921Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=208.812µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.099990671Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.100104745Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=114.144µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.100433801Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.100826255Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=392.535µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.101222626Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-09T14:01:25.137 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.101649977Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=427.15µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.102055225Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.10246859Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=413.074µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.102876252Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.10328684Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=410.328µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.103764104Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.104142091Z 
level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=378.128µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.104555505Z level=info msg="Executing migration" id="create anon_device table" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.104901002Z level=info msg="Migration successfully executed" id="create anon_device table" duration=345.587µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.105301722Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.105758276Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=455.683µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.106297917Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.106710208Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=410.739µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator 
t=2026-03-09T14:01:25.107211487Z level=info msg="Executing migration" id="create signing_key table" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.107610724Z level=info msg="Migration successfully executed" id="create signing_key table" duration=399.147µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.108251303Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.108741811Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=490.728µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.10927041Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.109695385Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=425.045µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.110085596Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.110196234Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=110.887µs 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.110699796Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.113302036Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.602122ms 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.113750908Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-09T14:01:25.137 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.11406223Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=311.512µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.11451144Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.114938089Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" 
duration=426.428µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.115710144Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.116096177Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=385.912µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.116519739Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.117044151Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=524.151µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.117612996Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.118178444Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=565.027µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.11870521Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.119229351Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=524.1µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.119774101Z level=info msg="Executing migration" id="create sso_setting table" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.120253137Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=480.339µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.120903355Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.121349911Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=446.968µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.121887667Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-09T14:01:25.138 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.122096588Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=209.271µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.122767284Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.12286699Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=100.227µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.123435424Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.126084453Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.648829ms 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.126583567Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.129313166Z level=info 
msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.729438ms 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.129861973Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.13010041Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=238.677µs 2026-03-09T14:01:25.138 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=migrator t=2026-03-09T14:01:25.130652111Z level=info msg="migrations completed" performed=547 skipped=0 duration=865.87842ms 2026-03-09T14:01:25.292 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:24 vm03 bash[88243]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=sqlstore t=2026-03-09T14:01:25.131340851Z level=info msg="Created default organization" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=secrets t=2026-03-09T14:01:25.136339757Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=plugin.store t=2026-03-09T14:01:25.144860286Z level=info msg="Loading plugins..." 
2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=local.finder t=2026-03-09T14:01:25.184698594Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=plugin.store t=2026-03-09T14:01:25.184715917Z level=info msg="Plugins loaded" count=55 duration=39.855901ms 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=query_data t=2026-03-09T14:01:25.185949476Z level=info msg="Query Service initialization" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=live.push_http t=2026-03-09T14:01:25.187259899Z level=info msg="Live Push Gateway initialization" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.migration t=2026-03-09T14:01:25.188481655Z level=info msg=Starting 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.migration t=2026-03-09T14:01:25.188733086Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.migration orgID=1 t=2026-03-09T14:01:25.188968887Z level=info msg="Migrating alerts for organisation" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.migration orgID=1 t=2026-03-09T14:01:25.189262416Z level=info msg="Alerts found to migrate" alerts=0 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.migration t=2026-03-09T14:01:25.189983727Z level=info msg="Completed alerting migration" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.state.manager t=2026-03-09T14:01:25.197040475Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=infra.usagestats.collector t=2026-03-09T14:01:25.197901306Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.datasources t=2026-03-09T14:01:25.19899307Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.datasources t=2026-03-09T14:01:25.203405629Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.alerting t=2026-03-09T14:01:25.207897497Z level=info msg="starting to provision alerting" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.alerting t=2026-03-09T14:01:25.207963211Z level=info msg="finished to provision alerting" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=http.server t=2026-03-09T14:01:25.209005471Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=http.server t=2026-03-09T14:01:25.209240611Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.state.manager t=2026-03-09T14:01:25.209265007Z level=info msg="Warming state cache for startup" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.state.manager t=2026-03-09T14:01:25.209426689Z level=info msg="State cache has been initialized" states=0 duration=161.282µs 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.dashboard t=2026-03-09T14:01:25.211550194Z level=info msg="starting to provision dashboards" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 
09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.multiorg.alertmanager t=2026-03-09T14:01:25.225275595Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-09T14:01:25.413 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ngalert.scheduler t=2026-03-09T14:01:25.225293258Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-09T14:01:25.414 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ticker t=2026-03-09T14:01:25.225306693Z level=info msg=starting first_tick=2026-03-09T14:01:30Z 2026-03-09T14:01:25.414 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=grafanaStorageLogger t=2026-03-09T14:01:25.247419864Z level=info msg="Storage starting" 2026-03-09T14:01:25.414 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=sqlstore.transactions t=2026-03-09T14:01:25.300729176Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-09T14:01:25.414 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=provisioning.dashboard t=2026-03-09T14:01:25.330130639Z level=info msg="finished to provision dashboards" 2026-03-09T14:01:25.414 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=plugins.update.checker t=2026-03-09T14:01:25.333709137Z level=info msg="Update check succeeded" duration=88.338679ms 2026-03-09T14:01:25.414 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:25 vm04 ceph-mon[54203]: Deploying daemon node-exporter.a on vm03 2026-03-09T14:01:25.740 
INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=grafana-apiserver t=2026-03-09T14:01:25.410817085Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-09T14:01:25.740 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:01:25 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=grafana-apiserver t=2026-03-09T14:01:25.412402041Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-09T14:01:25.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:25 vm03 ceph-mon[52586]: Deploying daemon node-exporter.a on vm03 2026-03-09T14:01:25.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:25 vm03 ceph-mon[58994]: Deploying daemon node-exporter.a on vm03 2026-03-09T14:01:26.340 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Getting image source signatures 2026-03-09T14:01:26.340 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-09T14:01:26.340 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-09T14:01:26.340 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T14:01:26.340 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-mon[52586]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:01:26.665 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:26 vm03 ceph-mon[58994]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s 
wr, 6 op/s 2026-03-09T14:01:26.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:26 vm04 ceph-mon[54203]: pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T14:01:27.044 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:26.736Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004095478s 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: Writing manifest to image destination 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 podman[88243]: 2026-03-09 14:01:26.831878332 +0000 UTC m=+1.943867768 container create 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 podman[88243]: 2026-03-09 14:01:26.85767596 +0000 UTC m=+1.969665396 container init 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 podman[88243]: 2026-03-09 14:01:26.860941362 +0000 UTC m=+1.972930788 container start 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 (image=quay.io/prometheus/node-exporter:v1.7.0, 
name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 bash[88243]: 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 podman[88243]: 2026-03-09 14:01:26.826114212 +0000 UTC m=+1.938103648 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 systemd[1]: Started Ceph node-exporter.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.867Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.868Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.868Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T14:01:27.044 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.869Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.870Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T14:01:27.045 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T14:01:27.046 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:01:26 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a[88297]: ts=2026-03-09T14:01:26.871Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T14:01:27.437 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:27 vm04 systemd[1]: Starting Ceph node-exporter.b for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:01:27.741 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:27 vm04 bash[80290]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0... 
2026-03-09T14:01:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:27 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:27 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:27 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:27 vm04 ceph-mon[54203]: Deploying daemon node-exporter.b on vm04 2026-03-09T14:01:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:27 vm04 ceph-mon[54203]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[52586]: Deploying daemon node-exporter.b on vm04 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[52586]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:28.292 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[58994]: Deploying daemon node-exporter.b on vm04 2026-03-09T14:01:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:27 vm03 ceph-mon[58994]: pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-09T14:01:29.241 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:28 vm04 bash[80290]: Getting image source signatures 2026-03-09T14:01:29.241 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:28 vm04 bash[80290]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24 2026-03-09T14:01:29.241 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:28 vm04 bash[80290]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510 2026-03-09T14:01:29.241 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:28 vm04 bash[80290]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a 2026-03-09T14:01:29.970 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 bash[80290]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e 2026-03-09T14:01:29.970 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 bash[80290]: Writing manifest to image destination 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 podman[80290]: 2026-03-09 14:01:29.656426006 +0000 UTC m=+2.134132374 container create 08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 podman[80290]: 2026-03-09 14:01:29.689821334 +0000 UTC m=+2.167527712 container init 
08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 podman[80290]: 2026-03-09 14:01:29.692658254 +0000 UTC m=+2.170364622 container start 08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 bash[80290]: 08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 podman[80290]: 2026-03-09 14:01:29.649654782 +0000 UTC m=+2.127361150 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.695Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z 
caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=arp 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 
vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=edac 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=os 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.696Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=stat 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=time 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=uname 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b[80355]: ts=2026-03-09T14:01:29.697Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-09T14:01:29.971 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:01:29 vm04 systemd[1]: Started Ceph node-exporter.b for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 
2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:30 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:30.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:01:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 
ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:30.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:30 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:31 vm04 
ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:31 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:31 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:31.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:31.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:31.360 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:31.361 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:31.361 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:31.361 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:31 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.147 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 systemd[1]: Stopping Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88029]: ts=2026-03-09T14:01:32.144Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 
2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88802]: 2026-03-09 14:01:32.159099807 +0000 UTC m=+0.038495236 container died d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88802]: 2026-03-09 14:01:32.182030439 +0000 UTC m=+0.061425868 container remove d37657d3b04e221dc81ddebf4c9d419334d3bc63975ba71180e16c7c36e4ef5d (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88802]: 2026-03-09 14:01:32.183359356 +0000 UTC m=+0.062754785 volume remove 6da366211e10f383d581e367f7c4d25013b48587f64e757ae216fb95796a7cda 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 bash[88802]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@alertmanager.a.service: Deactivated successfully. 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 systemd[1]: Stopped Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 systemd[1]: Starting Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88869]: 2026-03-09 14:01:32.391100977 +0000 UTC m=+0.027619299 volume create 3a30cf6a270a6077aed3edc7cff43c1f1a549077125f64418c02d7d78def3116 2026-03-09T14:01:32.422 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88869]: 2026-03-09 14:01:32.39362451 +0000 UTC m=+0.030142832 container create 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:32.792 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88869]: 2026-03-09 14:01:32.428059024 +0000 UTC m=+0.064577346 container init 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88869]: 2026-03-09 14:01:32.430881267 +0000 UTC m=+0.067399589 container start 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 bash[88869]: 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 podman[88869]: 2026-03-09 14:01:32.377694954 +0000 UTC m=+0.014213286 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 systemd[1]: Started Ceph alertmanager.a 
for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.452Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.452Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.453Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.103 port=9094 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.454Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.491Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.492Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.494Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-09T14:01:32.793 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:32.494Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: Reconfiguring daemon alertmanager.a on vm03 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: Reconfiguring daemon alertmanager.a on vm03 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:32 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: Reconfiguring daemon alertmanager.a on vm03 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:32 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.175 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 systemd[1]: Stopping Ceph prometheus.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.172Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.173Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.174Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 
2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.174Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[78553]: ts=2026-03-09T14:01:33.174Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80914]: 2026-03-09 14:01:33.18641783 +0000 UTC m=+0.028370043 container died 84dcf9eddbfd4c4f3deea27d0cb69f3a9119ce3c3c089904294389fddf7479e5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80914]: 2026-03-09 14:01:33.204827519 +0000 UTC m=+0.046779732 container remove 84dcf9eddbfd4c4f3deea27d0cb69f3a9119ce3c3c089904294389fddf7479e5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 bash[80914]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@prometheus.a.service: Deactivated successfully. 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 systemd[1]: Stopped Ceph prometheus.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 systemd[1]: Starting Ceph prometheus.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80979]: 2026-03-09 14:01:33.388582906 +0000 UTC m=+0.020154314 container create d979bccb1f857250e5a961543a2e091d95a48f3726f2c22cb69b66b8aa3a57d4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80979]: 2026-03-09 14:01:33.417895272 +0000 UTC m=+0.049466680 container init d979bccb1f857250e5a961543a2e091d95a48f3726f2c22cb69b66b8aa3a57d4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80979]: 2026-03-09 14:01:33.420740527 +0000 UTC m=+0.052311926 container start d979bccb1f857250e5a961543a2e091d95a48f3726f2c22cb69b66b8aa3a57d4 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a, maintainer=The Prometheus Authors ) 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 bash[80979]: d979bccb1f857250e5a961543a2e091d95a48f3726f2c22cb69b66b8aa3a57d4 2026-03-09T14:01:33.428 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 podman[80979]: 2026-03-09 14:01:33.379985133 +0000 UTC m=+0.011556541 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0 2026-03-09T14:01:33.480 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:01:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 systemd[1]: Started Ceph prometheus.a for 
f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.450Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.452Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.452Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 vm04 (none))" 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.452Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-09T14:01:33.742 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.452Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.455Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.457Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.459Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.459Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=12.083µs 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.460Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.459Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.460Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.460Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.463Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.463Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=220.142µs wal_replay_duration=2.911229ms wbl_replay_duration=150ns total_replay_duration=3.280452ms 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.466Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.466Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.466Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.476Z caller=main.go:1372 level=info msg="Completed loading of configuration file" 
filename=/etc/prometheus/prometheus.yml totalDuration=10.324235ms db_storage=872ns remote_storage=871ns web_handler=270ns query_engine=291ns scrape=1.662441ms scrape_sd=59.663µs notify=6.743µs notify_sd=5.05µs rules=8.137942ms tracing=2.465µs 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.476Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-09T14:01:33.743 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:01:33 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:01:33.476Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: Reconfiguring prometheus.a (dependencies changed)... 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: Reconfiguring daemon prometheus.a on vm04 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.743 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:33.743 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:33 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: Reconfiguring daemon prometheus.a on vm04 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:33.780 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Bus STOPPING 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Bus STOPPED 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Bus STARTING 2026-03-09T14:01:33.781 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Serving on http://:::9283 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: Reconfiguring prometheus.a (dependencies changed)... 
2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: Reconfiguring daemon prometheus.a on vm04 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:33.782 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:33 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:34.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Bus STARTED 2026-03-09T14:01:34.042 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:33] ENGINE Bus STOPPING 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:34.538 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:34 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STOPPED 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STARTING 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Serving on http://:::9283 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STARTED 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STOPPING 2026-03-09T14:01:34.543 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:34.455Z 
caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000964743s 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm03.local:9093"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm04.local:9095"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm04.local:3000"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:01:34.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:34 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:35.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T14:01:35.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STOPPED 2026-03-09T14:01:35.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STARTING 2026-03-09T14:01:35.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Serving on http://:::9283 2026-03-09T14:01:35.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:34 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: [09/Mar/2026:14:01:34] ENGINE Bus STARTED 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:35.991 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:35 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.043 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:01:36.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:35 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:01:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:36 vm04 ceph-mon[54203]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:36 vm03 ceph-mon[52586]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:36 vm03 ceph-mon[58994]: pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:38 vm04 ceph-mon[54203]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:38 vm03 ceph-mon[52586]: pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:38 vm03 ceph-mon[58994]: pgmap v17: 132 
pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:39.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:39 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:40.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:01:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:01:40.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:40 vm04 ceph-mon[54203]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:41.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:40 vm03 ceph-mon[52586]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:41.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:40 vm03 ceph-mon[58994]: pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:41.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:41 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:41.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:41 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-09T14:01:41.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:41 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:42.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:42 vm03 ceph-mon[52586]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:42.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:42 vm03 ceph-mon[58994]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:42.792 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:01:42 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:01:42.457Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003353783s 2026-03-09T14:01:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:42 vm04 ceph-mon[54203]: pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:01:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:01:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:44 vm04 ceph-mon[54203]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:45.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:44 vm03 ceph-mon[52586]: pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:44 vm03 ceph-mon[58994]: pgmap v20: 132 pgs: 132 
active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:46.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:45 vm04 ceph-mon[54203]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:45 vm03 ceph-mon[52586]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:45 vm03 ceph-mon[58994]: pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:48 vm03 ceph-mon[52586]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:48 vm03 ceph-mon[58994]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:48.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:48 vm04 ceph-mon[54203]: pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:50.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:50 vm03 ceph-mon[52586]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:50.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:50 vm03 ceph-mon[58994]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:01:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:50 vm04 ceph-mon[54203]: pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T14:01:50.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:01:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:01:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:51 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:51.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:51 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:51 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:01:52.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:52 vm03 ceph-mon[52586]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:52.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:52 vm03 ceph-mon[58994]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:52.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:52 vm04 ceph-mon[54203]: pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:01:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:01:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:01:54.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:54 vm03 ceph-mon[52586]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:54.292 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:54 vm03 ceph-mon[58994]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:54 vm04 ceph-mon[54203]: pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:55.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:55 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:55.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:55 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:55.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:55 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:01:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:56 vm04 ceph-mon[54203]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:56.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:01:56 vm03 ceph-mon[52586]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:56.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:56 vm03 ceph-mon[58994]: pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:01:58.452 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:01:58 vm04 ceph-mon[54203]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:01:58 vm03 ceph-mon[52586]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:01:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:01:58 vm03 ceph-mon[58994]: pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:00.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:02:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:00 vm04 ceph-mon[54203]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:02:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:00 vm03 ceph-mon[52586]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:02:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:00 vm03 ceph-mon[58994]: pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-09T14:02:01.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:02.454 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:02 vm03 ceph-mon[52586]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:02.454 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:02 vm03 ceph-mon[58994]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:02.490 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:02 vm04 ceph-mon[54203]: pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:03.543 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:02:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:04 vm04 ceph-mon[54203]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:04 vm03 ceph-mon[52586]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:04 vm03 ceph-mon[58994]: pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:06 vm04 ceph-mon[54203]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:06 vm03 ceph-mon[52586]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:06 vm03 ceph-mon[58994]: pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-09T14:02:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:08 vm04 ceph-mon[54203]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:08 vm03 ceph-mon[52586]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:08 vm03 ceph-mon[58994]: pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 ' 
entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]: dispatch 2026-03-09T14:02:09.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:10.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:02:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:10 
vm04 ceph-mon[54203]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]': finished 2026-03-09T14:02:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T14:02:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]': finished 2026-03-09T14:02:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:10 vm04 ceph-mon[54203]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[52586]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[52586]: osdmap e57: 8 total, 8 up, 8 
in 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[58994]: pgmap v33: 132 pgs: 132 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.4", "id": [1, 5]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [1, 2]}]': finished 2026-03-09T14:02:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:10 vm03 ceph-mon[58994]: osdmap e57: 8 total, 8 up, 8 in 2026-03-09T14:02:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:11 vm04 ceph-mon[54203]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:02:11.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:11 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:11.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:11 vm03 ceph-mon[52586]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:02:11.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:11.793 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:11 vm03 ceph-mon[58994]: osdmap e58: 8 total, 8 up, 8 in 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'. 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:state without impacting any branches by switching back to a branch. 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: git switch -c <new-branch-name> 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:Or undo this operation with: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: git switch - 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr: 2026-03-09T14:02:11.869 INFO:tasks.workunit.client.0.vm03.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose 2026-03-09T14:02:11.874 DEBUG:teuthology.orchestra.run.vm03:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-09T14:02:11.934 INFO:tasks.workunit.client.0.vm03.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-09T14:02:11.936 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T14:02:11.936 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-09T14:02:11.981 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-09T14:02:12.018 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-09T14:02:12.049 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-09T14:02:12.051 
INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T14:02:12.051 INFO:tasks.workunit.client.0.vm03.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-09T14:02:12.083 INFO:tasks.workunit.client.0.vm03.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-09T14:02:12.086 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T14:02:12.087 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-09T14:02:12.149 INFO:tasks.workunit:Running workunits matching rados/test_python.sh on client.0... 2026-03-09T14:02:12.149 INFO:tasks.workunit:Running workunit rados/test_python.sh... 2026-03-09T14:02:12.149 DEBUG:teuthology.orchestra.run.vm03:workunit test rados/test_python.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-09T14:02:12.216 INFO:tasks.workunit.client.0.vm03.stderr:+ ceph osd pool create rbd 2026-03-09T14:02:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:12 vm04 ceph-mon[54203]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:12 vm04 ceph-mon[54203]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:02:12.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:12 vm03 ceph-mon[52586]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:12 vm03 ceph-mon[52586]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:02:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:12 vm03 ceph-mon[58994]: pgmap v36: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:12 vm03 ceph-mon[58994]: Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY) 2026-03-09T14:02:13.220 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:02:13.402 INFO:tasks.workunit.client.0.vm03.stderr:pool 'rbd' already exists 2026-03-09T14:02:13.413 INFO:tasks.workunit.client.0.vm03.stderr:++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh 2026-03-09T14:02:13.414 INFO:tasks.workunit.client.0.vm03.stderr:+ python3 -m pytest -v /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/../../../src/test/pybind/test_rados.py 2026-03-09T14:02:13.502 INFO:tasks.workunit.client.0.vm03.stdout:============================= test session starts ============================== 2026-03-09T14:02:13.502 INFO:tasks.workunit.client.0.vm03.stdout:platform linux -- Python 3.9.25, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3 2026-03-09T14:02:13.502 INFO:tasks.workunit.client.0.vm03.stdout:cachedir: .pytest_cache 2026-03-09T14:02:13.502 INFO:tasks.workunit.client.0.vm03.stdout:rootdir: /home/ubuntu/cephtest/clone.client.0/src/test/pybind, configfile: pytest.ini 
2026-03-09T14:02:13.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:13 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:13 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:13 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:13 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.689 INFO:tasks.workunit.client.0.vm03.stdout:collecting ... collected 91 items 2026-03-09T14:02:13.689 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:02:13.695 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init_error PASSED [ 1%] 2026-03-09T14:02:13.733 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_rados_init PASSED [ 2%] 2026-03-09T14:02:13.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:13 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:13 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:13.745 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_ioctx_context_manager PASSED [ 3%] 2026-03-09T14:02:13.751 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv PASSED [ 4%] 2026-03-09T14:02:13.755 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::test_parse_argv_empty_str PASSED [ 5%] 2026-03-09T14:02:13.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_configuring PASSED [ 6%] 2026-03-09T14:02:13.772 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_connected PASSED [ 7%] 2026-03-09T14:02:13.784 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRadosStateError::test_shutdown PASSED [ 8%] 2026-03-09T14:02:13.800 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_ping_monitor PASSED [ 9%] 2026-03-09T14:02:13.813 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_annotations PASSED [ 10%] 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:14 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1443226124' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1443226124' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: pgmap v37: 132 pgs: 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "rbd"}]': finished 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: osdmap e59: 8 total, 8 up, 8 in 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1141001604' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "rbd"}]: dispatch 2026-03-09T14:02:14.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:14 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1443226124' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:02:15.365 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create PASSED [ 12%] 2026-03-09T14:02:15.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:15 vm04 ceph-mon[54203]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:02:15.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:15 vm04 ceph-mon[54203]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:02:15.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:15 vm03 ceph-mon[52586]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:02:15.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:15 vm03 ceph-mon[52586]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:02:15.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:15 vm03 ceph-mon[58994]: osdmap e60: 8 total, 8 up, 8 in 2026-03-09T14:02:15.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:15 vm03 ceph-mon[58994]: osdmap e61: 8 total, 8 up, 8 in 2026-03-09T14:02:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:16 vm04 ceph-mon[54203]: pgmap v40: 196 pgs: 64 unknown, 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 889 B/s rd, 0 op/s 2026-03-09T14:02:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:16 vm04 ceph-mon[54203]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:02:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:16 vm04 ceph-mon[54203]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[52586]: pgmap v40: 196 pgs: 64 unknown, 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 889 B/s rd, 0 op/s 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[52586]: Health check failed: 2 pool(s) do not have 
an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[52586]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[58994]: pgmap v40: 196 pgs: 64 unknown, 3 peering, 129 active+clean; 455 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 889 B/s rd, 0 op/s 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[58994]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:02:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:16 vm03 ceph-mon[58994]: osdmap e62: 8 total, 8 up, 8 in 2026-03-09T14:02:17.370 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_create_utf8 PASSED [ 13%] 2026-03-09T14:02:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:18 vm04 ceph-mon[54203]: pgmap v43: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:18 vm04 ceph-mon[54203]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:02:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:18 vm04 ceph-mon[54203]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[52586]: pgmap v43: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[52586]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[52586]: Health check cleared: 
PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[58994]: pgmap v43: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[58994]: osdmap e63: 8 total, 8 up, 8 in 2026-03-09T14:02:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:18 vm03 ceph-mon[58994]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 2 pgs peering) 2026-03-09T14:02:19.381 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_pool_lookup_utf8 PASSED [ 14%] 2026-03-09T14:02:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:19 vm04 ceph-mon[54203]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:02:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:19 vm03 ceph-mon[52586]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:02:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:19 vm03 ceph-mon[58994]: osdmap e64: 8 total, 8 up, 8 in 2026-03-09T14:02:20.421 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:02:20.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:20 vm04 ceph-mon[54203]: pgmap v46: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:20.808 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:20 vm04 ceph-mon[54203]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:02:20.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:20 vm03 ceph-mon[52586]: pgmap v46: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:20.808 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:20 vm03 ceph-mon[52586]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:02:20.808 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:20 vm03 ceph-mon[58994]: pgmap v46: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 116 B/s, 4 objects/s recovering 2026-03-09T14:02:20.808 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:20 vm03 ceph-mon[58994]: osdmap e65: 8 total, 8 up, 8 in 2026-03-09T14:02:21.412 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_eexist PASSED [ 15%] 2026-03-09T14:02:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:21 vm04 ceph-mon[54203]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T14:02:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:21 vm04 ceph-mon[54203]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:21 vm03 ceph-mon[52586]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:21 vm03 ceph-mon[52586]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
14:02:21 vm03 ceph-mon[58994]: osdmap e66: 8 total, 8 up, 8 in 2026-03-09T14:02:21.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:21 vm03 ceph-mon[58994]: osdmap e67: 8 total, 8 up, 8 in 2026-03-09T14:02:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:22 vm04 ceph-mon[54203]: pgmap v49: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:22 vm04 ceph-mon[54203]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:02:22.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:22 vm03 ceph-mon[52586]: pgmap v49: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:22.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:22 vm03 ceph-mon[52586]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:02:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:22 vm03 ceph-mon[58994]: pgmap v49: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:22 vm03 ceph-mon[58994]: osdmap e68: 8 total, 8 up, 8 in 2026-03-09T14:02:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:02:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:24 vm04 ceph-mon[54203]: pgmap v52: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:24 vm04 ceph-mon[54203]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:02:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:24 vm04 ceph-mon[54203]: from='mgr.24539 ' 
entity='mgr.y' 2026-03-09T14:02:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[52586]: pgmap v52: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[52586]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[58994]: pgmap v52: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[58994]: osdmap e69: 8 total, 8 up, 8 in 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:02:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:25.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:25 vm03 ceph-mon[58994]: osdmap e70: 8 total, 8 up, 8 in 2026-03-09T14:02:25.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:25 vm03 ceph-mon[52586]: osdmap e70: 8 total, 8 up, 8 in 2026-03-09T14:02:25.991 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:25 vm04 ceph-mon[54203]: osdmap e70: 8 total, 8 up, 8 in
2026-03-09T14:02:26.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:26 vm04 ceph-mon[54203]: pgmap v55: 260 pgs: 96 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:26.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:26 vm04 ceph-mon[54203]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:26.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:26 vm04 ceph-mon[54203]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[52586]: pgmap v55: 260 pgs: 96 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[52586]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[52586]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[58994]: pgmap v55: 260 pgs: 96 unknown, 164 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[58994]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:27.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:26 vm03 ceph-mon[58994]: osdmap e71: 8 total, 8 up, 8 in
2026-03-09T14:02:27.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:27 vm04 ceph-mon[54203]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T14:02:27.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:27 vm04 ceph-mon[54203]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T14:02:28.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:27 vm03 ceph-mon[52586]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T14:02:28.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:27 vm03 ceph-mon[52586]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T14:02:28.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:27 vm03 ceph-mon[58994]: osdmap e72: 8 total, 8 up, 8 in
2026-03-09T14:02:28.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:27 vm03 ceph-mon[58994]: osdmap e73: 8 total, 8 up, 8 in
2026-03-09T14:02:28.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:28 vm04 ceph-mon[54203]: pgmap v58: 196 pgs: 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:28.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:28 vm04 ceph-mon[54203]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T14:02:29.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:28 vm03 ceph-mon[52586]: pgmap v58: 196 pgs: 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:29.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:28 vm03 ceph-mon[52586]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T14:02:29.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:28 vm03 ceph-mon[58994]: pgmap v58: 196 pgs: 196 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:29.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:28 vm03 ceph-mon[58994]: osdmap e74: 8 total, 8 up, 8 in
2026-03-09T14:02:29.677 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_list_pools PASSED [ 16%]
2026-03-09T14:02:30.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available
2026-03-09T14:02:30.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:30 vm04 ceph-mon[54203]: pgmap v61: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:30.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:30 vm04 ceph-mon[54203]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T14:02:31.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:30 vm03 ceph-mon[52586]: pgmap v61: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:31.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:30 vm03 ceph-mon[52586]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T14:02:31.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:30 vm03 ceph-mon[58994]: pgmap v61: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:31.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:30 vm03 ceph-mon[58994]: osdmap e75: 8 total, 8 up, 8 in
2026-03-09T14:02:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:31 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:31 vm04 ceph-mon[54203]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T14:02:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:31 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[52586]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[58994]: osdmap e76: 8 total, 8 up, 8 in
2026-03-09T14:02:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:31 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:33.107 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: pgmap v64: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[52586]: osdmap e78: 8 total, 8 up, 8 in
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: pgmap v64: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished
2026-03-09T14:02:33.108 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:32 vm03 ceph-mon[58994]: osdmap e78: 8 total, 8 up, 8 in
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: pgmap v64: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: osdmap e77: 8 total, 8 up, 8 in
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]: dispatch
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier add", "pool": "foo", "tierpool": "foo-cache", "force_nonempty": ""}]': finished
2026-03-09T14:02:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:32 vm04 ceph-mon[54203]: osdmap e78: 8 total, 8 up, 8 in
2026-03-09T14:02:33.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: pgmap v67: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: osdmap e79: 8 total, 8 up, 8 in
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:33 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: pgmap v67: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: osdmap e79: 8 total, 8 up, 8 in
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: pgmap v67: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier cache-mode", "pool": "foo-cache", "tierpool": "foo-cache", "mode": "readonly", "yes_i_really_mean_it": true}]': finished
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: osdmap e79: 8 total, 8 up, 8 in
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/981071650' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:33 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]: dispatch
2026-03-09T14:02:35.851 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:35 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished
2026-03-09T14:02:35.851 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:35 vm04 ceph-mon[54203]: osdmap e80: 8 total, 8 up, 8 in
2026-03-09T14:02:35.852 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:35 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:02:35.852 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:35 vm04 ceph-mon[54203]: pgmap v70: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[52586]: osdmap e80: 8 total, 8 up, 8 in
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[52586]: pgmap v70: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd tier remove", "pool": "foo", "tierpool": "foo-cache"}]': finished
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[58994]: osdmap e80: 8 total, 8 up, 8 in
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:02:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:35 vm03 ceph-mon[58994]: pgmap v70: 228 pgs: 64 unknown, 164 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:36.834 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_pool_base_tier PASSED [ 17%]
2026-03-09T14:02:36.847 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_fsid PASSED [ 18%]
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: osdmap e81: 8 total, 8 up, 8 in
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:02:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: osdmap e81: 8 total, 8 up, 8 in
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: osdmap e81: 8 total, 8 up, 8 in
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:02:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:02:37.829 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_blocklist_add PASSED [ 19%]
2026-03-09T14:02:37.845 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_get_cluster_stats PASSED [ 20%]
2026-03-09T14:02:37.859 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestRados::test_monitor_log PASSED [ 21%]
2026-03-09T14:02:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:37 vm04 ceph-mon[54203]: osdmap e82: 8 total, 8 up, 8 in
2026-03-09T14:02:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch
2026-03-09T14:02:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:37 vm04 ceph-mon[54203]: pgmap v73: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[52586]: osdmap e82: 8 total, 8 up, 8 in
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[52586]: pgmap v73: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[58994]: osdmap e82: 8 total, 8 up, 8 in
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]: dispatch
2026-03-09T14:02:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:37 vm03 ceph-mon[58994]: pgmap v73: 164 pgs: 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:38 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished
2026-03-09T14:02:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:38 vm04 ceph-mon[54203]: osdmap e83: 8 total, 8 up, 8 in
2026-03-09T14:02:39.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:38 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished
2026-03-09T14:02:39.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:38 vm03 ceph-mon[52586]: osdmap e83: 8 total, 8 up, 8 in
2026-03-09T14:02:39.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:38 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4268249118' entity='client.admin' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "1.2.3.4/123", "expire": 1.0}]': finished
2026-03-09T14:02:39.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:38 vm03 ceph-mon[58994]: osdmap e83: 8 total, 8 up, 8 in
2026-03-09T14:02:40.241 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: osdmap e84: 8 total, 8 up, 8 in
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3863338938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: pgmap v76: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:39 vm04 ceph-mon[54203]: osdmap e85: 8 total, 8 up, 8 in
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: osdmap e84: 8 total, 8 up, 8 in
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3863338938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: pgmap v76: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[52586]: osdmap e85: 8 total, 8 up, 8 in
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: osdmap e84: 8 total, 8 up, 8 in
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3863338938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: pgmap v76: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 222 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:39 vm03 ceph-mon[58994]: osdmap e85: 8 total, 8 up, 8 in
2026-03-09T14:02:40.857 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_last_version PASSED [ 23%]
2026-03-09T14:02:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:40 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:40 vm04 ceph-mon[54203]: osdmap e86: 8 total, 8 up, 8 in
2026-03-09T14:02:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:40 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:40 vm03 ceph-mon[52586]: osdmap e86: 8 total, 8 up, 8 in
2026-03-09T14:02:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:40 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:02:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:40 vm03 ceph-mon[58994]: osdmap e86: 8 total, 8 up, 8 in
2026-03-09T14:02:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:41 vm04 ceph-mon[54203]: pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:41 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:41 vm04 ceph-mon[54203]: osdmap e87: 8 total, 8 up, 8 in
2026-03-09T14:02:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:41 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2073519238' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:41 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[52586]: pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[52586]: osdmap e87: 8 total, 8 up, 8 in
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2073519238' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[58994]: pgmap v79: 164 pgs: 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[58994]: osdmap e87: 8 total, 8 up, 8 in
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2073519238' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:41 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:43.061 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_stats PASSED [ 24%]
2026-03-09T14:02:43.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:43 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:43.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:43 vm04 ceph-mon[54203]: osdmap e88: 8 total, 8 up, 8 in
2026-03-09T14:02:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T14:02:43.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:43 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:43.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:43 vm03 ceph-mon[52586]: osdmap e88: 8 total, 8 up, 8 in
2026-03-09T14:02:43.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:43 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:43.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:43 vm03 ceph-mon[58994]: osdmap e88: 8 total, 8 up, 8 in
2026-03-09T14:02:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:44 vm04 ceph-mon[54203]: pgmap v82: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:44 vm04 ceph-mon[54203]: osdmap e89: 8 total, 8 up, 8 in
2026-03-09T14:02:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:44 vm03 ceph-mon[52586]: pgmap v82: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:44 vm03 ceph-mon[52586]: osdmap e89: 8 total, 8 up, 8 in
2026-03-09T14:02:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:44 vm03 ceph-mon[58994]: pgmap v82: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:02:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:44 vm03 ceph-mon[58994]: osdmap e89: 8 total, 8 up, 8 in
2026-03-09T14:02:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:45 vm04 ceph-mon[54203]: osdmap e90: 8 total, 8 up, 8 in
2026-03-09T14:02:45.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:45 vm03 ceph-mon[52586]: osdmap e90: 8 total, 8 up, 8 in
2026-03-09T14:02:45.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:45 vm03 ceph-mon[58994]: osdmap e90: 8 total, 8 up, 8 in
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: pgmap v85: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: osdmap e91: 8 total, 8 up, 8 in
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3288322226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:46 vm04 ceph-mon[54203]: osdmap e92: 8 total, 8 up, 8 in
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: pgmap v85: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: osdmap e91: 8 total, 8 up, 8 in
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3288322226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[52586]: osdmap e92: 8 total, 8 up, 8 in
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: pgmap v85: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 226 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: osdmap e91: 8 total, 8 up, 8 in
2026-03-09T14:02:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3288322226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:02:46.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: from='client.?
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:46.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:46.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:46 vm03 ceph-mon[58994]: osdmap e92: 8 total, 8 up, 8 in 2026-03-09T14:02:47.159 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write PASSED [ 25%] 2026-03-09T14:02:48.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:48 vm04 ceph-mon[54203]: pgmap v88: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:48.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:48 vm04 ceph-mon[54203]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T14:02:48.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:48 vm03 ceph-mon[52586]: pgmap v88: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:48.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:48 vm03 ceph-mon[52586]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T14:02:48.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:48 vm03 ceph-mon[58994]: pgmap v88: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:48.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:48 vm03 ceph-mon[58994]: osdmap e93: 8 total, 8 up, 8 in 2026-03-09T14:02:49.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:49 vm04 ceph-mon[54203]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T14:02:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:49 vm03 ceph-mon[52586]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T14:02:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 
09 14:02:49 vm03 ceph-mon[58994]: osdmap e94: 8 total, 8 up, 8 in 2026-03-09T14:02:50.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:02:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:02:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:50 vm04 ceph-mon[54203]: pgmap v91: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:50 vm04 ceph-mon[54203]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T14:02:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:50 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/981014591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:50 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[52586]: pgmap v91: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[52586]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/981014591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[58994]: pgmap v91: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 239 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[58994]: osdmap e95: 8 total, 8 up, 8 in 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/981014591' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:50.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:50 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:51.205 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_full PASSED [ 26%] 2026-03-09T14:02:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:51 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:51 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:51.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:51 vm04 ceph-mon[54203]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[52586]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:51.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:51 vm03 ceph-mon[58994]: osdmap e96: 8 total, 8 up, 8 in 2026-03-09T14:02:52.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:52 vm04 ceph-mon[54203]: pgmap v94: 196 pgs: 196 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:02:52.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:52 vm04 ceph-mon[54203]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:02:52.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:52 vm03 ceph-mon[52586]: pgmap v94: 196 pgs: 196 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:02:52.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:52 vm03 ceph-mon[52586]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:02:52.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:52 vm03 ceph-mon[58994]: pgmap v94: 196 pgs: 196 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:02:52.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:52 vm03 ceph-mon[58994]: osdmap e97: 8 total, 8 up, 8 in 2026-03-09T14:02:53.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:53 vm04 ceph-mon[54203]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T14:02:53.542 
INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:02:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:02:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:02:53.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:53 vm03 ceph-mon[52586]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T14:02:53.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:53 vm03 ceph-mon[58994]: osdmap e98: 8 total, 8 up, 8 in 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2541387938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:54.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:54 vm04 ceph-mon[54203]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2541387938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[52586]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: pgmap v97: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: osdmap e99: 8 total, 8 up, 8 in 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2541387938' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:54.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:54 vm03 ceph-mon[58994]: osdmap e100: 8 total, 8 up, 8 in 2026-03-09T14:02:55.217 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame PASSED [ 27%] 2026-03-09T14:02:55.741 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:02:55 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=infra.usagestats t=2026-03-09T14:02:55.251418187Z level=info msg="Usage stats are ready to report" 2026-03-09T14:02:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:56 vm04 ceph-mon[54203]: pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:02:56.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:56 vm04 ceph-mon[54203]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:02:56.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:56 vm03 ceph-mon[52586]: pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:02:56.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:56 vm03 ceph-mon[52586]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:02:56.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:56 vm03 ceph-mon[58994]: pgmap v100: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 293 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:02:56.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:56 vm03 ceph-mon[58994]: osdmap e101: 8 total, 8 up, 8 in 2026-03-09T14:02:57.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:57 vm03 ceph-mon[52586]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:02:57.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:57 vm03 ceph-mon[58994]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:02:57.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:57 vm04 
ceph-mon[54203]: osdmap e102: 8 total, 8 up, 8 in 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: pgmap v103: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1640925610' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[52586]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: pgmap v103: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1640925610' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:02:58 vm03 ceph-mon[58994]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: pgmap v103: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: osdmap e103: 8 total, 8 up, 8 in 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1640925610' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:02:58.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:02:58 vm04 ceph-mon[54203]: osdmap e104: 8 total, 8 up, 8 in 2026-03-09T14:02:59.238 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_append PASSED [ 28%] 2026-03-09T14:03:00.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:00 vm04 ceph-mon[54203]: pgmap v106: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:00 vm04 ceph-mon[54203]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T14:03:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:00 vm03 ceph-mon[52586]: pgmap v106: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:00 vm03 ceph-mon[52586]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T14:03:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:00 vm03 ceph-mon[58994]: pgmap v106: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 315 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:00 vm03 ceph-mon[58994]: osdmap e105: 8 total, 8 up, 8 in 2026-03-09T14:03:01.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:01.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:01 vm03 ceph-mon[52586]: osdmap e106: 8 total, 8 up, 8 
in 2026-03-09T14:03:01.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:01.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:01 vm03 ceph-mon[58994]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:03:01.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:01.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:01 vm04 ceph-mon[54203]: osdmap e106: 8 total, 8 up, 8 in 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[52586]: pgmap v109: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[52586]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2108773679' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[58994]: pgmap v109: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[58994]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2108773679' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:02 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:02.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:02 vm04 ceph-mon[54203]: pgmap v109: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:02.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:02 vm04 ceph-mon[54203]: osdmap e107: 8 total, 8 up, 8 in 2026-03-09T14:03:02.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:02 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2108773679' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:02.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:02 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:03.258 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_zeros PASSED [ 29%] 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[52586]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[52586]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[58994]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T14:03:03.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:03 vm03 ceph-mon[58994]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T14:03:03.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:03 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:03.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:03 vm04 ceph-mon[54203]: osdmap e108: 8 total, 8 up, 8 in 2026-03-09T14:03:03.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:03 vm04 ceph-mon[54203]: osdmap e109: 8 total, 8 up, 8 in 2026-03-09T14:03:04.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:04 vm04 ceph-mon[54203]: pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:04.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:04 vm03 ceph-mon[52586]: pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:04.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:04 vm03 ceph-mon[58994]: pgmap v112: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:05.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:05 vm04 ceph-mon[54203]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T14:03:05.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:05 vm04 ceph-mon[54203]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:03:05.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:05 vm03 ceph-mon[52586]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T14:03:05.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:05 vm03 ceph-mon[52586]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:03:05.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:05 vm03 ceph-mon[58994]: osdmap e110: 8 total, 8 up, 8 in 2026-03-09T14:03:05.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:05 vm03 ceph-mon[58994]: osdmap e111: 8 total, 8 up, 8 in 2026-03-09T14:03:06.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:06 vm04 ceph-mon[54203]: pgmap v115: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:06.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:06 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1000760050' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:06.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:06 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[52586]: pgmap v115: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1000760050' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[58994]: pgmap v115: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 323 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1000760050' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:06.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:06 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:07.422 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_trunc PASSED [ 30%] 2026-03-09T14:03:07.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:07 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:07.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:07 vm04 ceph-mon[54203]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:03:07.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:07 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:07.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:07 vm03 ceph-mon[52586]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:03:07.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:07 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:07.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:07 vm03 ceph-mon[58994]: osdmap e112: 8 total, 8 up, 8 in 2026-03-09T14:03:08.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:08 vm04 ceph-mon[54203]: pgmap v118: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:08.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:08 vm04 ceph-mon[54203]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T14:03:08.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:08 vm03 ceph-mon[52586]: pgmap v118: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:08.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:08 vm03 ceph-mon[52586]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T14:03:08.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:08 vm03 ceph-mon[58994]: pgmap v118: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:08.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:08 vm03 ceph-mon[58994]: osdmap e113: 8 total, 8 up, 8 in 2026-03-09T14:03:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:09 vm04 ceph-mon[54203]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T14:03:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:09 vm04 ceph-mon[54203]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T14:03:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[52586]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T14:03:09.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[52586]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T14:03:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[58994]: osdmap e114: 8 total, 8 up, 8 in 2026-03-09T14:03:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:09 vm03 ceph-mon[58994]: osdmap e115: 8 total, 8 up, 8 in 2026-03-09T14:03:10.459 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:10 vm04 ceph-mon[54203]: pgmap v121: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:10 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1760379636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:10 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:10 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:10 vm04 ceph-mon[54203]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[52586]: pgmap v121: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1760379636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[52586]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[58994]: pgmap v121: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 324 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1760379636' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:10 vm03 ceph-mon[58994]: osdmap e116: 8 total, 8 up, 8 in 2026-03-09T14:03:11.445 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext PASSED [ 31%] 2026-03-09T14:03:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:11 vm04 ceph-mon[54203]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:03:11.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:11 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:11.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:11 vm03 ceph-mon[52586]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:03:11.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:11.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:11 vm03 ceph-mon[58994]: osdmap e117: 8 total, 8 up, 8 in 2026-03-09T14:03:12.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:12 vm03 ceph-mon[52586]: pgmap v124: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:03:12.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:12 vm03 ceph-mon[58994]: pgmap v124: 196 pgs: 196 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:03:12.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:12 vm04 ceph-mon[54203]: pgmap v124: 196 pgs: 196 
active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:03:13.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:13 vm04 ceph-mon[54203]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T14:03:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:13 vm03 ceph-mon[52586]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T14:03:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:13 vm03 ceph-mon[58994]: osdmap e118: 8 total, 8 up, 8 in 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: pgmap v127: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3098200582' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:14 vm04 ceph-mon[54203]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: pgmap v127: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3098200582' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[52586]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: pgmap v127: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: osdmap e119: 8 total, 8 up, 8 in 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3098200582' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:14 vm03 ceph-mon[58994]: osdmap e120: 8 total, 8 up, 8 in 2026-03-09T14:03:15.564 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects_empty PASSED [ 32%] 2026-03-09T14:03:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:16 vm04 ceph-mon[54203]: pgmap v130: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:16 vm04 ceph-mon[54203]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:03:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:16 vm03 ceph-mon[52586]: pgmap v130: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:16 vm03 ceph-mon[52586]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:03:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:16 vm03 ceph-mon[58994]: pgmap v130: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:16 vm03 ceph-mon[58994]: osdmap e121: 8 total, 8 up, 8 in 2026-03-09T14:03:17.990 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:17 vm04 ceph-mon[54203]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:03:18.042 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:17 vm03 ceph-mon[52586]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:03:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:17 vm03 ceph-mon[58994]: osdmap e122: 8 total, 8 up, 8 in 2026-03-09T14:03:18.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:18 vm04 ceph-mon[54203]: pgmap v133: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:18.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:18 vm04 ceph-mon[54203]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T14:03:18.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:18 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/434180063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[52586]: pgmap v133: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[52586]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/434180063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[58994]: pgmap v133: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[58994]: osdmap e123: 8 total, 8 up, 8 in 2026-03-09T14:03:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:18 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/434180063' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:19.597 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_read_crc PASSED [ 34%] 2026-03-09T14:03:19.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:19 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/434180063' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:19.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:19 vm04 ceph-mon[54203]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T14:03:20.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:19 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/434180063' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:20.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:19 vm03 ceph-mon[52586]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T14:03:20.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:19 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/434180063' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:20.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:19 vm03 ceph-mon[58994]: osdmap e124: 8 total, 8 up, 8 in 2026-03-09T14:03:20.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:20.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:20 vm04 ceph-mon[54203]: pgmap v136: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:20.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:20 vm04 ceph-mon[54203]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T14:03:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:20 vm03 ceph-mon[52586]: pgmap v136: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:20 vm03 ceph-mon[52586]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T14:03:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:20 vm03 ceph-mon[58994]: pgmap v136: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:20 vm03 ceph-mon[58994]: osdmap e125: 8 total, 8 up, 8 in 2026-03-09T14:03:21.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:21.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:21 vm04 ceph-mon[54203]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:03:22.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:21 vm03 ceph-mon[52586]: from='client.24418 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:22.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:21 vm03 ceph-mon[52586]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:03:22.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:22.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:21 vm03 ceph-mon[58994]: osdmap e126: 8 total, 8 up, 8 in 2026-03-09T14:03:22.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:22 vm04 ceph-mon[54203]: pgmap v139: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:22.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:22 vm04 ceph-mon[54203]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:03:22.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:22 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/267935413' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:22.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:22 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[52586]: pgmap v139: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[52586]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/267935413' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[58994]: pgmap v139: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[58994]: osdmap e127: 8 total, 8 up, 8 in 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/267935413' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:23.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:22 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:23.629 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_objects PASSED [ 35%] 2026-03-09T14:03:23.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:23 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:23.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:23 vm04 ceph-mon[54203]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T14:03:23.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:23 vm04 ceph-mon[54203]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[52586]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[52586]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[58994]: osdmap e128: 8 total, 8 up, 8 in 2026-03-09T14:03:24.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:23 vm03 ceph-mon[58994]: osdmap e129: 8 total, 8 up, 8 in 2026-03-09T14:03:24.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:24 vm04 ceph-mon[54203]: pgmap v142: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:24.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:25.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:24 vm03 ceph-mon[52586]: pgmap v142: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:25.042 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:25.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:24 vm03 ceph-mon[58994]: pgmap v142: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:25.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:25.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:25 vm04 ceph-mon[54203]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T14:03:26.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:25 vm03 ceph-mon[52586]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T14:03:26.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:25 vm03 ceph-mon[58994]: osdmap e130: 8 total, 8 up, 8 in 2026-03-09T14:03:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:26 vm04 ceph-mon[54203]: pgmap v145: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:26 vm04 ceph-mon[54203]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:03:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:26 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2182640391' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:26 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[52586]: pgmap v145: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[52586]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2182640391' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[58994]: pgmap v145: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 325 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[58994]: osdmap e131: 8 total, 8 up, 8 in 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2182640391' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:26 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:27.806 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_ns_objects PASSED [ 36%] 2026-03-09T14:03:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:27 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:27 vm04 ceph-mon[54203]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:03:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:27 vm04 ceph-mon[54203]: pgmap v148: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:27 vm04 ceph-mon[54203]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[52586]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[52586]: pgmap v148: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[52586]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[58994]: osdmap e132: 8 total, 8 up, 8 in 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[58994]: pgmap v148: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:27 vm03 ceph-mon[58994]: osdmap e133: 8 total, 8 up, 8 in 2026-03-09T14:03:30.147 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:29 vm04 ceph-mon[54203]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:03:30.147 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:29 vm04 ceph-mon[54203]: pgmap v151: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:30.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:29 vm03 ceph-mon[52586]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:03:30.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:29 vm03 ceph-mon[52586]: pgmap v151: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:30.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:29 vm03 ceph-mon[58994]: osdmap e134: 8 total, 8 up, 8 in 2026-03-09T14:03:30.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:29 vm03 ceph-mon[58994]: pgmap v151: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:30.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[52586]: osdmap e135: 8 total, 8 up, 8 in 
2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3546856846' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[58994]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3546856846' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.194 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:30 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:30 vm04 ceph-mon[54203]: osdmap e135: 8 total, 8 up, 8 in 2026-03-09T14:03:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:30 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3546856846' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:30 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:30 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:31.853 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs PASSED [ 37%] 2026-03-09T14:03:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:31 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:31 vm04 ceph-mon[54203]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:03:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:31 vm04 ceph-mon[54203]: pgmap v154: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[52586]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[52586]: pgmap v154: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[58994]: osdmap e136: 8 total, 8 up, 8 in 2026-03-09T14:03:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:31 vm03 ceph-mon[58994]: pgmap v154: 196 pgs: 196 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s 2026-03-09T14:03:33.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:32 vm04 ceph-mon[54203]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:03:33.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:33.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:32 vm03 ceph-mon[52586]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:03:33.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:32 vm03 ceph-mon[58994]: osdmap e137: 8 total, 8 up, 8 in 2026-03-09T14:03:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:33 vm04 ceph-mon[54203]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:03:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:33 vm04 ceph-mon[54203]: pgmap v157: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:33 vm04 ceph-mon[54203]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:03:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[52586]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:03:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[52586]: pgmap v157: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:34.292 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[52586]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:03:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[58994]: osdmap e138: 8 total, 8 up, 8 in 2026-03-09T14:03:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[58994]: pgmap v157: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:33 vm03 ceph-mon[58994]: osdmap e139: 8 total, 8 up, 8 in 2026-03-09T14:03:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:34 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3669237253' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:34 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:34 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:34 vm04 ceph-mon[54203]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3669237253' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[52586]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3669237253' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:34 vm03 ceph-mon[58994]: osdmap e140: 8 total, 8 up, 8 in 2026-03-09T14:03:35.882 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_obj_xattrs PASSED [ 38%] 2026-03-09T14:03:36.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:35 vm04 ceph-mon[54203]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:36.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:35 vm04 ceph-mon[54203]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T14:03:36.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:35 vm03 ceph-mon[52586]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:36.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:35 vm03 ceph-mon[52586]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T14:03:36.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:35 vm03 ceph-mon[58994]: pgmap v160: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 326 MiB used, 160 GiB / 160 GiB 
avail 2026-03-09T14:03:36.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:35 vm03 ceph-mon[58994]: osdmap e141: 8 total, 8 up, 8 in 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/165806341' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:37.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:36 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/165806341' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: osdmap e142: 8 total, 8 up, 8 in 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/165806341' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:37.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:36 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:37 vm04 ceph-mon[54203]: pgmap v163: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:37 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:37 vm04 ceph-mon[54203]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[52586]: pgmap v163: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[52586]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[58994]: pgmap v163: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:38.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:37 vm03 ceph-mon[58994]: osdmap e143: 8 total, 8 up, 8 in 2026-03-09T14:03:38.909 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_id PASSED [ 39%] 2026-03-09T14:03:40.158 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:39 vm04 ceph-mon[54203]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T14:03:40.158 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:39 vm04 ceph-mon[54203]: pgmap v166: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:40.158 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[52586]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[52586]: pgmap v166: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[58994]: osdmap e144: 8 total, 8 up, 8 in 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[58994]: pgmap v166: 164 pgs: 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:40.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:39 vm03 ceph-mon[58994]: from='mgr.24539 
192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:40.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:40 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:40 vm04 ceph-mon[54203]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T14:03:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:40 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/775623638' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:40 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[52586]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/775623638' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[58994]: osdmap e145: 8 total, 8 up, 8 in 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/775623638' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:41.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:40 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:41.912 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_pool_name PASSED [ 40%] 2026-03-09T14:03:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:41 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/775623638' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:41 vm04 ceph-mon[54203]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T14:03:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:41 vm04 ceph-mon[54203]: pgmap v169: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:41 vm04 ceph-mon[54203]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/775623638' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[52586]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[52586]: pgmap v169: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[52586]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/775623638' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[58994]: osdmap e146: 8 total, 8 up, 8 in 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[58994]: pgmap v169: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:42.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:41 vm03 ceph-mon[58994]: osdmap e147: 8 total, 8 up, 8 in 2026-03-09T14:03:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:44.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:44 vm03 ceph-mon[52586]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T14:03:44.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:44 vm03 ceph-mon[52586]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:44.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:44 vm03 ceph-mon[58994]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T14:03:44.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:44 vm03 ceph-mon[58994]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:44 vm04 ceph-mon[54203]: osdmap e148: 8 total, 8 up, 8 in 2026-03-09T14:03:44.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:44 vm04 ceph-mon[54203]: pgmap v172: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:45.292 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:45 vm03 ceph-mon[52586]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T14:03:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:45 vm03 ceph-mon[58994]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T14:03:45.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:45 vm04 ceph-mon[54203]: osdmap e149: 8 total, 8 up, 8 in 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[52586]: pgmap v174: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[52586]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/505506765' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[58994]: pgmap v174: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[58994]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/505506765' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:46 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:46 vm04 ceph-mon[54203]: pgmap v174: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 327 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:46 vm04 ceph-mon[54203]: osdmap e150: 8 total, 8 up, 8 in 2026-03-09T14:03:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:46 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/505506765' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:46.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:46 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:47.030 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_create_snap PASSED [ 41%] 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[52586]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[58994]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T14:03:47.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:47 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:47.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:47 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:47.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:47 vm04 ceph-mon[54203]: osdmap e151: 8 total, 8 up, 8 in 2026-03-09T14:03:47.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:47 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:48 vm03 ceph-mon[52586]: pgmap v177: 196 pgs: 196 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:48 vm03 ceph-mon[52586]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T14:03:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:48 vm03 ceph-mon[58994]: pgmap v177: 196 pgs: 196 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:48 vm03 ceph-mon[58994]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T14:03:48.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:48 vm04 ceph-mon[54203]: pgmap v177: 196 pgs: 196 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:48.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:48 vm04 ceph-mon[54203]: osdmap e152: 8 total, 8 up, 8 in 2026-03-09T14:03:49.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:49 vm04 ceph-mon[54203]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T14:03:49.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:49 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/153552664' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:49.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:49 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[52586]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/153552664' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[58994]: osdmap e153: 8 total, 8 up, 8 in 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/153552664' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:49.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:49 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:50.261 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps_empty PASSED [ 42%] 2026-03-09T14:03:50.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:03:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:03:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:50 vm04 ceph-mon[54203]: pgmap v180: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:50 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:50 vm04 ceph-mon[54203]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[52586]: pgmap v180: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[52586]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[58994]: pgmap v180: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:50 vm03 ceph-mon[58994]: osdmap e154: 8 total, 8 up, 8 in 2026-03-09T14:03:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:51 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:51 vm04 ceph-mon[54203]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T14:03:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:51 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:51 vm03 ceph-mon[52586]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T14:03:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:51 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:03:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:51 vm03 ceph-mon[58994]: osdmap e155: 8 total, 8 up, 8 in 2026-03-09T14:03:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:52 vm04 ceph-mon[54203]: pgmap v183: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:52 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:52.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:52 vm04 ceph-mon[54203]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[52586]: pgmap v183: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[52586]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[58994]: pgmap v183: 164 pgs: 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:52.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:52 vm03 ceph-mon[58994]: osdmap e156: 8 total, 8 up, 8 in 2026-03-09T14:03:53.396 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:03:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:03:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:03:53.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:53 vm04 ceph-mon[54203]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T14:03:53.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:53 vm03 ceph-mon[52586]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T14:03:53.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:53 vm03 ceph-mon[58994]: osdmap e157: 8 total, 8 up, 8 in 2026-03-09T14:03:54.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:54 vm04 ceph-mon[54203]: pgmap v186: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:54.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:54 vm04 ceph-mon[54203]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T14:03:54.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:54 vm04 ceph-mon[54203]: 
from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:54.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:54 vm04 ceph-mon[54203]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[52586]: pgmap v186: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[52586]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[52586]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[58994]: pgmap v186: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[58994]: osdmap e158: 8 total, 8 up, 8 in 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:03:54.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:54 vm03 ceph-mon[58994]: osdmap e159: 8 total, 8 up, 8 in 2026-03-09T14:03:55.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:55 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/649661153' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:55.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:55 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:55.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:55 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/649661153' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:55.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:55 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:55.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/649661153' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:55.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:55 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:56.468 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_list_snaps PASSED [ 43%] 2026-03-09T14:03:56.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:56 vm04 ceph-mon[54203]: pgmap v189: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:56.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:56 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:56.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:56 vm04 ceph-mon[54203]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[52586]: pgmap v189: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[52586]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[58994]: pgmap v189: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 328 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:03:56.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:56 vm03 ceph-mon[58994]: osdmap e160: 8 total, 8 up, 8 in 2026-03-09T14:03:57.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:57 vm03 ceph-mon[52586]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T14:03:57.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:57 vm03 ceph-mon[58994]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T14:03:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:57 vm04 ceph-mon[54203]: osdmap e161: 8 total, 8 up, 8 in 2026-03-09T14:03:58.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[52586]: pgmap v192: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:58.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:58.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[52586]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T14:03:58.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[58994]: pgmap v192: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:58.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:58.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:58 vm03 ceph-mon[58994]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T14:03:58.892 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:58 vm04 ceph-mon[54203]: pgmap v192: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:03:58.892 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:58 vm04 
ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:03:58.892 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:58 vm04 ceph-mon[54203]: osdmap e162: 8 total, 8 up, 8 in 2026-03-09T14:03:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:59 vm04 ceph-mon[54203]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T14:03:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:59 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/825536535' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:03:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:03:59 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[52586]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/825536535' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[58994]: osdmap e163: 8 total, 8 up, 8 in 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/825536535' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:03:59 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:00.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:00.572 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lookup_snap PASSED [ 45%] 2026-03-09T14:04:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:00 vm04 ceph-mon[54203]: pgmap v195: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:00 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:00 vm04 ceph-mon[54203]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[52586]: pgmap v195: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[52586]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[58994]: pgmap v195: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:00 vm03 ceph-mon[58994]: osdmap e164: 8 total, 8 up, 8 in 2026-03-09T14:04:01.943 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:01.944 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:01 vm03 ceph-mon[52586]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T14:04:01.944 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:01.944 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:01 vm03 ceph-mon[58994]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T14:04:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:01 vm04 ceph-mon[54203]: osdmap e165: 8 total, 8 up, 8 in 2026-03-09T14:04:02.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:02 vm04 ceph-mon[54203]: pgmap v198: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:02.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:02 vm04 ceph-mon[54203]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T14:04:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:02 vm03 ceph-mon[52586]: pgmap v198: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:02 vm03 ceph-mon[52586]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T14:04:03.042 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:02 vm03 ceph-mon[58994]: pgmap v198: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:03.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:02 vm03 ceph-mon[58994]: osdmap e166: 8 total, 8 up, 8 in 2026-03-09T14:04:03.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:03.682 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:03 vm04 ceph-mon[54203]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T14:04:03.682 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:03 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1509475916' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:03.682 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:03 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1509475916' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:03.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:03 vm04 ceph-mon[54203]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[52586]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1509475916' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1509475916' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[52586]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[58994]: osdmap e167: 8 total, 8 up, 8 in 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1509475916' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1509475916' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:04.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:03 vm03 ceph-mon[58994]: osdmap e168: 8 total, 8 up, 8 in 2026-03-09T14:04:04.657 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_timestamp PASSED [ 46%] 2026-03-09T14:04:04.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:04 vm04 ceph-mon[54203]: pgmap v201: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:04.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:04 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:04.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:04 vm04 ceph-mon[54203]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:04 vm03 ceph-mon[52586]: pgmap v201: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:04 vm03 
ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:04 vm03 ceph-mon[52586]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:04 vm03 ceph-mon[58994]: pgmap v201: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:04 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:05.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:04 vm03 ceph-mon[58994]: osdmap e169: 8 total, 8 up, 8 in 2026-03-09T14:04:06.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:06 vm04 ceph-mon[54203]: pgmap v204: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:06.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:06 vm04 ceph-mon[54203]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T14:04:07.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:06 vm03 ceph-mon[52586]: pgmap v204: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:07.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:06 vm03 ceph-mon[52586]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T14:04:07.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:06 vm03 ceph-mon[58994]: pgmap v204: 164 pgs: 164 active+clean; 455 KiB data, 329 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:07.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:06 vm03 ceph-mon[58994]: osdmap e170: 8 total, 8 up, 8 in 2026-03-09T14:04:07.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:07 vm04 ceph-mon[54203]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T14:04:08.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:07 vm03 
ceph-mon[52586]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T14:04:08.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:07 vm03 ceph-mon[58994]: osdmap e171: 8 total, 8 up, 8 in 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: pgmap v207: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/342774889' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:08.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:08 vm04 ceph-mon[54203]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: pgmap v207: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/342774889' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[52586]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: pgmap v207: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: osdmap e172: 8 total, 8 up, 8 in 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/342774889' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:09.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:08 vm03 ceph-mon[58994]: osdmap e173: 8 total, 8 up, 8 in 2026-03-09T14:04:09.683 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_snap PASSED [ 47%] 2026-03-09T14:04:09.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:10.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:10.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:10.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:10.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:10 vm04 ceph-mon[54203]: pgmap v210: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:10.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:10 vm04 ceph-mon[54203]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T14:04:11.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:10 vm03 ceph-mon[52586]: pgmap v210: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:11.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:10 vm03 ceph-mon[52586]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T14:04:11.042 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:10 vm03 ceph-mon[58994]: pgmap v210: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:11.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:10 vm03 ceph-mon[58994]: osdmap e174: 8 total, 8 up, 8 in 2026-03-09T14:04:11.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:11.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:11 vm04 ceph-mon[54203]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T14:04:11.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:11 vm04 ceph-mon[54203]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[52586]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[52586]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[58994]: osdmap e175: 8 total, 8 up, 8 in 2026-03-09T14:04:12.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:11 vm03 ceph-mon[58994]: osdmap e176: 8 total, 8 up, 8 in 2026-03-09T14:04:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:12 vm03 ceph-mon[52586]: pgmap v213: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T14:04:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:12 vm03 ceph-mon[52586]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T14:04:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:12 vm03 ceph-mon[58994]: pgmap v213: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:12 vm03 ceph-mon[58994]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T14:04:13.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:12 vm04 ceph-mon[54203]: pgmap v213: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:13.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:12 vm04 ceph-mon[54203]: osdmap e177: 8 total, 8 up, 8 in 2026-03-09T14:04:13.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:14 vm04 ceph-mon[54203]: pgmap v216: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:14 vm04 ceph-mon[54203]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T14:04:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:14 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/3905217943' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[52586]: pgmap v216: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[52586]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3905217943' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[58994]: pgmap v216: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[58994]: osdmap e178: 8 total, 8 up, 8 in 2026-03-09T14:04:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:14 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3905217943' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:15.722 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback PASSED [ 48%] 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3905217943' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[52586]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[52586]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3905217943' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[58994]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T14:04:16.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:15 vm03 ceph-mon[58994]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T14:04:16.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:15 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3905217943' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:16.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:15 vm04 ceph-mon[54203]: osdmap e179: 8 total, 8 up, 8 in 2026-03-09T14:04:16.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:15 vm04 ceph-mon[54203]: osdmap e180: 8 total, 8 up, 8 in 2026-03-09T14:04:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:16 vm03 ceph-mon[52586]: pgmap v219: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:16 vm03 ceph-mon[52586]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T14:04:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:16 vm03 ceph-mon[58994]: pgmap v219: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:16 vm03 
ceph-mon[58994]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T14:04:17.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:16 vm04 ceph-mon[54203]: pgmap v219: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 330 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:17.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:16 vm04 ceph-mon[54203]: osdmap e181: 8 total, 8 up, 8 in 2026-03-09T14:04:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:18 vm03 ceph-mon[52586]: pgmap v222: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:18 vm03 ceph-mon[52586]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T14:04:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:18 vm03 ceph-mon[58994]: pgmap v222: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:18 vm03 ceph-mon[58994]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T14:04:19.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:18 vm04 ceph-mon[54203]: pgmap v222: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:19.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:18 vm04 ceph-mon[54203]: osdmap e182: 8 total, 8 up, 8 in 2026-03-09T14:04:20.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:19 vm03 ceph-mon[52586]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T14:04:20.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:19 vm03 ceph-mon[58994]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T14:04:20.190 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:19 vm04 ceph-mon[54203]: osdmap e183: 8 total, 8 up, 8 in 2026-03-09T14:04:20.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:20 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:21.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:20 vm04 ceph-mon[54203]: pgmap v225: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:21.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:20 vm04 ceph-mon[54203]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T14:04:21.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:20 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3448060913' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[52586]: pgmap v225: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[52586]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3448060913' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[58994]: pgmap v225: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[58994]: osdmap e184: 8 total, 8 up, 8 in 2026-03-09T14:04:21.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:20 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3448060913' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:21.800 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_rollback_removed PASSED [ 49%] 2026-03-09T14:04:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:21 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3448060913' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:21 vm04 ceph-mon[54203]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T14:04:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:21 vm04 ceph-mon[54203]: pgmap v228: 196 pgs: 196 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:21 vm04 ceph-mon[54203]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3448060913' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[52586]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[52586]: pgmap v228: 196 pgs: 196 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[52586]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3448060913' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[58994]: osdmap e185: 8 total, 8 up, 8 in 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[58994]: pgmap v228: 196 pgs: 196 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:22.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:21 vm03 ceph-mon[58994]: osdmap e186: 8 total, 8 up, 8 in 2026-03-09T14:04:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:24.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:23 vm04 ceph-mon[54203]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T14:04:24.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:23 
vm04 ceph-mon[54203]: pgmap v231: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:24.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:23 vm03 ceph-mon[52586]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T14:04:24.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:23 vm03 ceph-mon[52586]: pgmap v231: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:24.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:23 vm03 ceph-mon[58994]: osdmap e187: 8 total, 8 up, 8 in 2026-03-09T14:04:24.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:23 vm03 ceph-mon[58994]: pgmap v231: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:24 vm04 ceph-mon[54203]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T14:04:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:24 vm03 ceph-mon[52586]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T14:04:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:24 vm03 ceph-mon[58994]: osdmap e188: 8 total, 8 up, 8 in 2026-03-09T14:04:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:26.241 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:25 vm04 ceph-mon[54203]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T14:04:26.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:25 vm04 ceph-mon[54203]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:26.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:25 vm03 ceph-mon[52586]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T14:04:26.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:25 vm03 ceph-mon[52586]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:26.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:25 vm03 ceph-mon[58994]: osdmap e189: 8 total, 8 up, 8 in 2026-03-09T14:04:26.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:25 vm03 ceph-mon[58994]: pgmap v234: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 331 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:26 vm04 ceph-mon[54203]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T14:04:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:26 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4175473386' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:26 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[52586]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4175473386' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[58994]: osdmap e190: 8 total, 8 up, 8 in 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4175473386' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:26 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:27.901 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_snap_read PASSED [ 50%] 2026-03-09T14:04:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:27 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:27 vm04 ceph-mon[54203]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T14:04:28.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:27 vm04 ceph-mon[54203]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[52586]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[52586]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[58994]: osdmap e191: 8 total, 8 up, 8 in 2026-03-09T14:04:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:27 vm03 ceph-mon[58994]: pgmap v237: 196 pgs: 196 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:04:29.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:28 vm04 ceph-mon[54203]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T14:04:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:28 vm03 ceph-mon[52586]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T14:04:29.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:28 vm03 ceph-mon[58994]: osdmap e192: 8 total, 8 up, 8 in 2026-03-09T14:04:30.198 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:29 vm04 ceph-mon[54203]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T14:04:30.198 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:29 vm04 ceph-mon[54203]: pgmap v240: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:30.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:29 vm03 ceph-mon[52586]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T14:04:30.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:29 vm03 ceph-mon[52586]: 
pgmap v240: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:30.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:29 vm03 ceph-mon[58994]: osdmap e193: 8 total, 8 up, 8 in 2026-03-09T14:04:30.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:29 vm03 ceph-mon[58994]: pgmap v240: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 350 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:30.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:31.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:30 vm04 ceph-mon[54203]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T14:04:31.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:30 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4011238372' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:31.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:30 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:31.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:30 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4011238372' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:31.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:30 vm04 ceph-mon[54203]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[52586]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4011238372' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4011238372' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[52586]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[58994]: osdmap e194: 8 total, 8 up, 8 in 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4011238372' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4011238372' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:31.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:30 vm03 ceph-mon[58994]: osdmap e195: 8 total, 8 up, 8 in 2026-03-09T14:04:31.940 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap PASSED [ 51%] 2026-03-09T14:04:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:31 vm04 ceph-mon[54203]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:04:32.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:31 vm04 ceph-mon[54203]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T14:04:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:31 vm03 ceph-mon[52586]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:04:32.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:31 vm03 ceph-mon[52586]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T14:04:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:31 vm03 ceph-mon[58994]: pgmap v243: 196 pgs: 196 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s 2026-03-09T14:04:32.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:31 vm03 ceph-mon[58994]: osdmap e196: 8 total, 8 up, 8 in 2026-03-09T14:04:33.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:33 vm04 ceph-mon[54203]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T14:04:34.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:33 vm04 ceph-mon[54203]: pgmap v246: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:33 vm03 ceph-mon[52586]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T14:04:34.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:33 vm03 ceph-mon[52586]: pgmap v246: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:34.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:33 vm03 ceph-mon[58994]: osdmap e197: 8 total, 8 up, 8 in 2026-03-09T14:04:34.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:33 vm03 ceph-mon[58994]: pgmap v246: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:34 vm04 ceph-mon[54203]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T14:04:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:34 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1187000216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:34 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[52586]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1187000216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[58994]: osdmap e198: 8 total, 8 up, 8 in 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1187000216' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:34 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:35.973 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_set_omap_aio PASSED [ 52%] 2026-03-09T14:04:36.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:35 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:36.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:35 vm04 ceph-mon[54203]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T14:04:36.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:35 vm04 ceph-mon[54203]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[52586]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[52586]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[58994]: osdmap e199: 8 total, 8 up, 8 in 2026-03-09T14:04:36.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:35 vm03 ceph-mon[58994]: pgmap v249: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 403 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:37.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:37 vm04 ceph-mon[54203]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T14:04:37.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:04:37.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:04:37.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[52586]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[58994]: osdmap e200: 8 total, 8 up, 8 in 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:04:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:04:38.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:38 vm04 ceph-mon[54203]: pgmap v251: 164 pgs: 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:38.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:38 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:04:38.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:38 vm04 ceph-mon[54203]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T14:04:38.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:38 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[52586]: pgmap v251: 164 pgs: 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[52586]: osdmap e201: 8 total, 8 up, 8 in 
2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[58994]: pgmap v251: 164 pgs: 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[58994]: osdmap e201: 8 total, 8 up, 8 in 2026-03-09T14:04:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:38 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:39.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:39 vm04 ceph-mon[54203]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T14:04:39.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:39 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1729627192' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:39 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:39 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:39.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:39 vm04 ceph-mon[54203]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[52586]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1729627192' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[52586]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[58994]: osdmap e202: 8 total, 8 up, 8 in 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1729627192' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:39 vm03 ceph-mon[58994]: osdmap e203: 8 total, 8 up, 8 in 2026-03-09T14:04:40.177 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_write_ops PASSED [ 53%] 2026-03-09T14:04:40.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:40.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:40 vm04 ceph-mon[54203]: pgmap v254: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:40.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:40 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:40.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:40 vm03 ceph-mon[52586]: pgmap v254: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:40.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:40 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:40.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:40 vm03 ceph-mon[58994]: pgmap v254: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 407 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:40.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:40 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:41.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 
09 14:04:41 vm04 ceph-mon[54203]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T14:04:41.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:41 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:41.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:41 vm03 ceph-mon[52586]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T14:04:41.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:41 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:41.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:41 vm03 ceph-mon[58994]: osdmap e204: 8 total, 8 up, 8 in 2026-03-09T14:04:41.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:41 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:42.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:42 vm04 ceph-mon[54203]: pgmap v257: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:42.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:42 vm04 ceph-mon[54203]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T14:04:42.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:42 vm03 ceph-mon[52586]: pgmap v257: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:42.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:42 vm03 ceph-mon[52586]: osdmap e205: 8 total, 8 up, 8 in 2026-03-09T14:04:42.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:42 vm03 ceph-mon[58994]: pgmap v257: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:42.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:42 vm03 ceph-mon[58994]: 
osdmap e205: 8 total, 8 up, 8 in 2026-03-09T14:04:43.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:43 vm04 ceph-mon[54203]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T14:04:43.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:43 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/905524787' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:43.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:43 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[52586]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/905524787' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[58994]: osdmap e206: 8 total, 8 up, 8 in 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/905524787' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:43.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:43 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:44.242 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute_op PASSED [ 54%] 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[52586]: pgmap v260: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[52586]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[58994]: pgmap v260: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:44.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:44 vm03 ceph-mon[58994]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T14:04:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:44 vm04 ceph-mon[54203]: pgmap v260: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:44 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:44 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:44 vm04 ceph-mon[54203]: osdmap e207: 8 total, 8 up, 8 in 2026-03-09T14:04:45.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:45 vm03 ceph-mon[52586]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T14:04:45.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:45 vm03 ceph-mon[58994]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T14:04:45.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:45 vm04 ceph-mon[54203]: osdmap e208: 8 total, 8 up, 8 in 2026-03-09T14:04:46.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:46 vm04 ceph-mon[54203]: pgmap v263: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:46.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:46 vm04 ceph-mon[54203]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T14:04:46.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:46 vm03 ceph-mon[52586]: pgmap v263: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:46.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:46 vm03 ceph-mon[52586]: osdmap e209: 8 
total, 8 up, 8 in 2026-03-09T14:04:46.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:46 vm03 ceph-mon[58994]: pgmap v263: 164 pgs: 164 active+clean; 455 KiB data, 412 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:46.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:46 vm03 ceph-mon[58994]: osdmap e209: 8 total, 8 up, 8 in 2026-03-09T14:04:47.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:47 vm04 ceph-mon[54203]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T14:04:47.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:47 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4090010968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:47.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:47 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[52586]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4090010968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[58994]: osdmap e210: 8 total, 8 up, 8 in 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4090010968' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:47.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:47 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:48.380 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_writesame_op PASSED [ 56%] 2026-03-09T14:04:48.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:48 vm04 ceph-mon[54203]: pgmap v266: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:48.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:48 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:48.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:48 vm04 ceph-mon[54203]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[52586]: pgmap v266: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[52586]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[58994]: pgmap v266: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:48.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:48 vm03 ceph-mon[58994]: osdmap e211: 8 total, 8 up, 8 in 2026-03-09T14:04:49.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:49 vm04 ceph-mon[54203]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T14:04:49.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:49 vm03 ceph-mon[52586]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T14:04:49.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:49 vm03 ceph-mon[58994]: osdmap e212: 8 total, 8 up, 8 in 2026-03-09T14:04:50.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:04:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:04:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:50 vm04 ceph-mon[54203]: pgmap v269: 164 pgs: 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:50 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:50.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:50 vm04 ceph-mon[54203]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[52586]: pgmap v269: 164 pgs: 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[52586]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[58994]: 
pgmap v269: 164 pgs: 164 active+clean; 455 KiB data, 413 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:50 vm03 ceph-mon[58994]: osdmap e213: 8 total, 8 up, 8 in 2026-03-09T14:04:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:51 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:51 vm04 ceph-mon[54203]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T14:04:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:51 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1970968340' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:51.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:51 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[52586]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1970968340' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[58994]: osdmap e214: 8 total, 8 up, 8 in 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1970968340' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:51 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:52.634 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_vals_by_keys PASSED [ 57%] 2026-03-09T14:04:52.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:52 vm04 ceph-mon[54203]: pgmap v272: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:52.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:52 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:52.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:52 vm04 ceph-mon[54203]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[52586]: pgmap v272: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[52586]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[58994]: pgmap v272: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:53.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:52 vm03 ceph-mon[58994]: osdmap e215: 8 total, 8 up, 8 in 2026-03-09T14:04:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:04:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:04:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:04:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:53 vm04 ceph-mon[54203]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T14:04:54.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:53 vm03 ceph-mon[52586]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T14:04:54.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:53 vm03 ceph-mon[58994]: osdmap e216: 8 total, 8 up, 8 in 2026-03-09T14:04:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:54 vm04 ceph-mon[54203]: pgmap v275: 164 pgs: 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:54 vm04 ceph-mon[54203]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T14:04:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:54 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[52586]: pgmap v275: 164 pgs: 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[52586]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[58994]: pgmap v275: 164 pgs: 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[58994]: osdmap e217: 8 total, 8 up, 8 in 2026-03-09T14:04:55.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:54 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:04:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:55 vm04 ceph-mon[54203]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T14:04:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:55 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1484065423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:55 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[52586]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1484065423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[58994]: osdmap e218: 8 total, 8 up, 8 in 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1484065423' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:56.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:55 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:56.684 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_get_omap_keys PASSED [ 58%] 2026-03-09T14:04:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:56 vm04 ceph-mon[54203]: pgmap v278: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:56 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:56 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:56 vm04 ceph-mon[54203]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[52586]: pgmap v278: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[52586]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[58994]: pgmap v278: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 421 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:04:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:56 vm03 ceph-mon[58994]: osdmap e219: 8 total, 8 up, 8 in 2026-03-09T14:04:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:57 vm04 ceph-mon[54203]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T14:04:58.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:57 vm03 ceph-mon[52586]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T14:04:58.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:57 vm03 ceph-mon[58994]: osdmap e220: 8 total, 8 up, 8 in 2026-03-09T14:04:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:58 vm04 ceph-mon[54203]: pgmap v281: 164 pgs: 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:58 vm04 ceph-mon[54203]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T14:04:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:58 vm03 ceph-mon[52586]: pgmap v281: 164 pgs: 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:58 vm03 ceph-mon[52586]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T14:04:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:58 vm03 ceph-mon[58994]: pgmap v281: 164 pgs: 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:04:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:58 vm03 ceph-mon[58994]: osdmap e221: 8 total, 8 up, 8 in 2026-03-09T14:04:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:59 vm04 ceph-mon[54203]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T14:04:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:59 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/3236473784' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:04:59.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:04:59 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[52586]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3236473784' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[58994]: osdmap e222: 8 total, 8 up, 8 in 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3236473784' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:00.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:04:59 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:00.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:05:00.708 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_clear_omap PASSED [ 59%] 2026-03-09T14:05:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:00 vm04 ceph-mon[54203]: pgmap v284: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:00 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:00 vm04 ceph-mon[54203]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T14:05:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[52586]: pgmap v284: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[52586]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T14:05:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[58994]: pgmap v284: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 426 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:01.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:00 vm03 ceph-mon[58994]: osdmap e223: 8 total, 8 up, 8 in 2026-03-09T14:05:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:01 vm03 ceph-mon[52586]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T14:05:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:01 vm03 ceph-mon[58994]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T14:05:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:02.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:01 vm04 ceph-mon[54203]: osdmap e224: 8 total, 8 up, 8 in 2026-03-09T14:05:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:02 vm04 ceph-mon[54203]: pgmap v287: 164 pgs: 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:02 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:02 vm04 ceph-mon[54203]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:03] "GET 
/metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[52586]: pgmap v287: 164 pgs: 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[52586]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[58994]: pgmap v287: 164 pgs: 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:03.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:02 vm03 ceph-mon[58994]: osdmap e225: 8 total, 8 up, 8 in 2026-03-09T14:05:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:04 vm04 ceph-mon[54203]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T14:05:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/539892486' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:04 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.492 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:04 vm04 ceph-mon[54203]: pgmap v290: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[52586]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/539892486' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[52586]: pgmap v290: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[58994]: osdmap e226: 8 total, 8 up, 8 in 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/539892486' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:04 vm03 ceph-mon[58994]: pgmap v290: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:05.328 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_remove_omap_range2 PASSED [ 60%] 2026-03-09T14:05:05.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:05 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:05.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:05 vm03 ceph-mon[52586]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T14:05:05.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:05 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:05.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:05 vm03 ceph-mon[58994]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T14:05:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:05 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:05.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:05 vm04 ceph-mon[54203]: osdmap e227: 8 total, 8 up, 8 in 2026-03-09T14:05:06.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:06 vm04 ceph-mon[54203]: pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:06.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:06 vm04 ceph-mon[54203]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T14:05:06.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:06 vm04 ceph-mon[54203]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[52586]: pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[52586]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[52586]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[58994]: pgmap v292: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 430 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[58994]: osdmap e228: 8 total, 8 up, 8 in 2026-03-09T14:05:07.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:06 vm03 ceph-mon[58994]: osdmap e229: 8 total, 8 up, 8 in 2026-03-09T14:05:08.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:08 vm04 ceph-mon[54203]: pgmap v295: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:08.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:08 vm04 ceph-mon[54203]: osdmap e230: 8 total, 8 up, 8 in 
2026-03-09T14:05:08.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:08 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4094168147' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[52586]: pgmap v295: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[52586]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4094168147' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[58994]: pgmap v295: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[58994]: osdmap e230: 8 total, 8 up, 8 in 2026-03-09T14:05:08.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:08 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4094168147' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:09.380 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_omap_cmp PASSED [ 61%] 2026-03-09T14:05:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:09 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/4094168147' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:09 vm04 ceph-mon[54203]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T14:05:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:09 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:09 vm04 ceph-mon[54203]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4094168147' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[52586]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[52586]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4094168147' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[58994]: osdmap e231: 8 total, 8 up, 8 in 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:09.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:09 vm03 ceph-mon[58994]: osdmap e232: 8 total, 8 up, 8 in 2026-03-09T14:05:10.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:05:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:10 vm04 ceph-mon[54203]: pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:10 vm04 ceph-mon[54203]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T14:05:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:10 vm03 ceph-mon[52586]: pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:10.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:10 vm03 ceph-mon[52586]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T14:05:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:10 vm03 ceph-mon[58994]: pgmap v298: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 431 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:10.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:10 vm03 ceph-mon[58994]: osdmap e233: 8 total, 8 up, 8 in 2026-03-09T14:05:11.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:11 vm03 ceph-mon[52586]: 
from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:11.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:11 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:11.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:11.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:11 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:11.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:11.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:11 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:12.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:12 vm04 ceph-mon[54203]: pgmap v301: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:12.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:12 vm04 ceph-mon[54203]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T14:05:12.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:12 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1608367221' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:12.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:12 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[52586]: pgmap v301: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[52586]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1608367221' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[58994]: pgmap v301: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[58994]: osdmap e234: 8 total, 8 up, 8 in 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1608367221' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:13.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:12 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:13.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:05:13.601 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_cmpext_op PASSED [ 62%] 2026-03-09T14:05:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:13 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:13 vm04 ceph-mon[54203]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T14:05:13.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:13 vm04 ceph-mon[54203]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[52586]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[52586]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[58994]: osdmap e235: 8 total, 8 up, 8 in 2026-03-09T14:05:14.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:13 vm03 ceph-mon[58994]: osdmap e236: 8 total, 8 up, 8 in 2026-03-09T14:05:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:14 vm04 ceph-mon[54203]: pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:14.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:14 vm04 ceph-mon[54203]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T14:05:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:14 vm03 ceph-mon[52586]: pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:15.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:14 vm03 ceph-mon[52586]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T14:05:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:14 vm03 ceph-mon[58994]: pgmap v304: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:15.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:14 vm03 ceph-mon[58994]: osdmap e237: 8 total, 8 up, 8 in 2026-03-09T14:05:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:16 vm04 ceph-mon[54203]: pgmap v307: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:16 vm04 ceph-mon[54203]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T14:05:16.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:16 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/3945657629' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[52586]: pgmap v307: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[52586]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3945657629' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[58994]: pgmap v307: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 435 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[58994]: osdmap e238: 8 total, 8 up, 8 in 2026-03-09T14:05:17.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:16 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3945657629' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:17.671 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_xattrs_op PASSED [ 63%] 2026-03-09T14:05:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3945657629' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:17.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:17 vm04 ceph-mon[54203]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T14:05:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:17 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3945657629' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:18.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:17 vm03 ceph-mon[52586]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T14:05:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:17 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3945657629' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:18.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:17 vm03 ceph-mon[58994]: osdmap e239: 8 total, 8 up, 8 in 2026-03-09T14:05:18.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:18 vm04 ceph-mon[54203]: pgmap v310: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:18.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:18 vm04 ceph-mon[54203]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T14:05:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:18 vm03 ceph-mon[52586]: pgmap v310: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:19.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:18 vm03 ceph-mon[52586]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T14:05:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:18 vm03 ceph-mon[58994]: pgmap v310: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:19.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:18 vm03 ceph-mon[58994]: osdmap e240: 8 total, 8 up, 8 in 2026-03-09T14:05:20.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:19 vm03 ceph-mon[52586]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T14:05:20.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:19 vm03 ceph-mon[58994]: osdmap e241: 8 total, 8 up, 8 in 
2026-03-09T14:05:20.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:19 vm04 ceph-mon[54203]: osdmap e241: 8 total, 8 up, 8 in 2026-03-09T14:05:20.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: pgmap v313: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3630695602' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[52586]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: pgmap v313: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3630695602' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:21.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:20 vm03 ceph-mon[58994]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T14:05:21.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: pgmap v313: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 436 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:21.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: osdmap e242: 8 total, 8 up, 8 in 2026-03-09T14:05:21.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3630695602' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:21.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:21.242 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:20 vm04 ceph-mon[54203]: osdmap e243: 8 total, 8 up, 8 in 2026-03-09T14:05:21.750 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_locator PASSED [ 64%] 2026-03-09T14:05:22.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:22.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:21 vm03 ceph-mon[52586]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T14:05:22.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:22.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:21 vm03 ceph-mon[58994]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T14:05:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:05:22.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:21 vm04 ceph-mon[54203]: osdmap e244: 8 total, 8 up, 8 in 2026-03-09T14:05:23.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:22 vm03 ceph-mon[52586]: pgmap v316: 196 pgs: 196 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:05:23.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:22 vm03 ceph-mon[58994]: pgmap v316: 196 pgs: 196 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:05:23.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:22 vm04 ceph-mon[54203]: pgmap v316: 196 pgs: 
196 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s 2026-03-09T14:05:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:05:24.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:23 vm04 ceph-mon[54203]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T14:05:24.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:23 vm04 ceph-mon[54203]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T14:05:24.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:23 vm03 ceph-mon[52586]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T14:05:24.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:23 vm03 ceph-mon[52586]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T14:05:24.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:23 vm03 ceph-mon[58994]: osdmap e245: 8 total, 8 up, 8 in 2026-03-09T14:05:24.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:23 vm03 ceph-mon[58994]: osdmap e246: 8 total, 8 up, 8 in 2026-03-09T14:05:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:24 vm04 ceph-mon[54203]: pgmap v319: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:24 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/614570858' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:24 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/614570858' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:25.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:24 vm04 ceph-mon[54203]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[52586]: pgmap v319: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/614570858' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/614570858' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[52586]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[58994]: pgmap v319: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/614570858' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/614570858' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:25.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:24 vm03 ceph-mon[58994]: osdmap e247: 8 total, 8 up, 8 in 2026-03-09T14:05:25.786 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_operate_aio_write_op PASSED [ 65%] 2026-03-09T14:05:27.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:26 vm03 ceph-mon[52586]: pgmap v322: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:27.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:26 vm03 ceph-mon[52586]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T14:05:27.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:26 vm03 ceph-mon[58994]: pgmap v322: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:27.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:26 vm03 ceph-mon[58994]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T14:05:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:26 vm04 ceph-mon[54203]: pgmap v322: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 440 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:27.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:26 vm04 ceph-mon[54203]: osdmap e248: 8 total, 8 up, 8 in 2026-03-09T14:05:28.142 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:27 vm04 ceph-mon[54203]: osdmap e249: 8 total, 8 
up, 8 in
2026-03-09T14:05:28.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:27 vm03 ceph-mon[52586]: osdmap e249: 8 total, 8 up, 8 in
2026-03-09T14:05:28.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:27 vm03 ceph-mon[58994]: osdmap e249: 8 total, 8 up, 8 in
2026-03-09T14:05:29.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:28 vm04 ceph-mon[54203]: pgmap v325: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:29.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:28 vm04 ceph-mon[54203]: osdmap e250: 8 total, 8 up, 8 in
2026-03-09T14:05:29.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:28 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3010573226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:29.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:28 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[52586]: pgmap v325: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[52586]: osdmap e250: 8 total, 8 up, 8 in
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3010573226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[58994]: pgmap v325: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[58994]: osdmap e250: 8 total, 8 up, 8 in
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3010573226' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:29.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:28 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:30.000 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write PASSED [ 67%]
2026-03-09T14:05:30.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available
2026-03-09T14:05:30.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:30 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:30.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:30 vm04 ceph-mon[54203]: osdmap e251: 8 total, 8 up, 8 in
2026-03-09T14:05:30.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:30 vm04 ceph-mon[54203]: pgmap v328: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[52586]: osdmap e251: 8 total, 8 up, 8 in
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[52586]: pgmap v328: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[58994]: osdmap e251: 8 total, 8 up, 8 in
2026-03-09T14:05:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:30 vm03 ceph-mon[58994]: pgmap v328: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:31.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:31 vm03 ceph-mon[52586]: osdmap e252: 8 total, 8 up, 8 in
2026-03-09T14:05:31.444 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:31 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:31.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:31 vm03 ceph-mon[58994]: osdmap e252: 8 total, 8 up, 8 in
2026-03-09T14:05:31.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:31 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:31.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:31 vm04 ceph-mon[54203]: osdmap e252: 8 total, 8 up, 8 in
2026-03-09T14:05:31.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:31 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:32 vm04 ceph-mon[54203]: pgmap v330: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:32 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:32.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:32 vm04 ceph-mon[54203]: osdmap e253: 8 total, 8 up, 8 in
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[52586]: pgmap v330: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[52586]: osdmap e253: 8 total, 8 up, 8 in
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[58994]: pgmap v330: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:32.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:32 vm03 ceph-mon[58994]: osdmap e253: 8 total, 8 up, 8 in
2026-03-09T14:05:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:33 vm04 ceph-mon[54203]: osdmap e254: 8 total, 8 up, 8 in
2026-03-09T14:05:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:33 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1145585902' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:33.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:33 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[52586]: osdmap e254: 8 total, 8 up, 8 in
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1145585902' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[58994]: osdmap e254: 8 total, 8 up, 8 in
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1145585902' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:33.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:33 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:34.196 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_cmpext PASSED [ 68%]
2026-03-09T14:05:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:34 vm04 ceph-mon[54203]: pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:34 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:34.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:34 vm04 ceph-mon[54203]: osdmap e255: 8 total, 8 up, 8 in
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[52586]: pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[52586]: osdmap e255: 8 total, 8 up, 8 in
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[58994]: pgmap v333: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:34.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:34 vm03 ceph-mon[58994]: osdmap e255: 8 total, 8 up, 8 in
2026-03-09T14:05:35.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:35 vm04 ceph-mon[54203]: osdmap e256: 8 total, 8 up, 8 in
2026-03-09T14:05:35.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:35 vm03 ceph-mon[52586]: osdmap e256: 8 total, 8 up, 8 in
2026-03-09T14:05:35.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:35 vm03 ceph-mon[58994]: osdmap e256: 8 total, 8 up, 8 in
2026-03-09T14:05:36.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:36 vm04 ceph-mon[54203]: pgmap v336: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:36.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:36 vm04 ceph-mon[54203]: osdmap e257: 8 total, 8 up, 8 in
2026-03-09T14:05:36.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:36 vm03 ceph-mon[52586]: pgmap v336: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:36.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:36 vm03 ceph-mon[52586]: osdmap e257: 8 total, 8 up, 8 in
2026-03-09T14:05:36.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:36 vm03 ceph-mon[58994]: pgmap v336: 164 pgs: 164 active+clean; 455 KiB data, 441 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:36.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:36 vm03 ceph-mon[58994]: osdmap e257: 8 total, 8 up, 8 in
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[52586]: osdmap e258: 8 total, 8 up, 8 in
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1944380756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[58994]: osdmap e258: 8 total, 8 up, 8 in
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1944380756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:05:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:37 vm04 ceph-mon[54203]: osdmap e258: 8 total, 8 up, 8 in
2026-03-09T14:05:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1944380756' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:37 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:37.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T14:05:38.297 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_rmxattr PASSED [ 69%]
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: osdmap e259: 8 total, 8 up, 8 in
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: osdmap e259: 8 total, 8 up, 8 in
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:05:38.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:38 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: pgmap v339: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: osdmap e259: 8 total, 8 up, 8 in
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T14:05:38.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:38 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y'
2026-03-09T14:05:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:39 vm03 ceph-mon[52586]: osdmap e260: 8 total, 8 up, 8 in
2026-03-09T14:05:39.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:05:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:39 vm03 ceph-mon[58994]: osdmap e260: 8 total, 8 up, 8 in
2026-03-09T14:05:39.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:39 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:05:39.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:39 vm04 ceph-mon[54203]: osdmap e260: 8 total, 8 up, 8 in
2026-03-09T14:05:39.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-09T14:05:40.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available
2026-03-09T14:05:40.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:40 vm04 ceph-mon[54203]: pgmap v342: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:40.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:40 vm04 ceph-mon[54203]: osdmap e261: 8 total, 8 up, 8 in
2026-03-09T14:05:40.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:40 vm03 ceph-mon[52586]: pgmap v342: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:40.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:40 vm03 ceph-mon[52586]: osdmap e261: 8 total, 8 up, 8 in
2026-03-09T14:05:40.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:40 vm03 ceph-mon[58994]: pgmap v342: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:40.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:40 vm03 ceph-mon[58994]: osdmap e261: 8 total, 8 up, 8 in
2026-03-09T14:05:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:41 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:41 vm04 ceph-mon[54203]: osdmap e262: 8 total, 8 up, 8 in
2026-03-09T14:05:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:41 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4163189268' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:41.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:41 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[52586]: osdmap e262: 8 total, 8 up, 8 in
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4163189268' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[58994]: osdmap e262: 8 total, 8 up, 8 in
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4163189268' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:41.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:41 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:42.388 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_no_comp_ref PASSED [ 70%]
2026-03-09T14:05:42.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:42 vm04 ceph-mon[54203]: pgmap v345: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:42.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:42 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:42.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:42 vm04 ceph-mon[54203]: osdmap e263: 8 total, 8 up, 8 in
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[52586]: pgmap v345: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[52586]: osdmap e263: 8 total, 8 up, 8 in
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[58994]: pgmap v345: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:42.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:42 vm03 ceph-mon[58994]: osdmap e263: 8 total, 8 up, 8 in
2026-03-09T14:05:43.396 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T14:05:43.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:43 vm04 ceph-mon[54203]: osdmap e264: 8 total, 8 up, 8 in
2026-03-09T14:05:43.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:43 vm03 ceph-mon[52586]: osdmap e264: 8 total, 8 up, 8 in
2026-03-09T14:05:43.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:43 vm03 ceph-mon[58994]: osdmap e264: 8 total, 8 up, 8 in
2026-03-09T14:05:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:44 vm04 ceph-mon[54203]: pgmap v348: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:44 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:44.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:44 vm04 ceph-mon[54203]: osdmap e265: 8 total, 8 up, 8 in
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[52586]: pgmap v348: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[52586]: osdmap e265: 8 total, 8 up, 8 in
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[58994]: pgmap v348: 164 pgs: 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:44.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:44 vm03 ceph-mon[58994]: osdmap e265: 8 total, 8 up, 8 in
2026-03-09T14:05:45.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:45 vm04 ceph-mon[54203]: osdmap e266: 8 total, 8 up, 8 in
2026-03-09T14:05:45.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:45 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2234198132' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:45.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:45 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[52586]: osdmap e266: 8 total, 8 up, 8 in
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2234198132' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[58994]: osdmap e266: 8 total, 8 up, 8 in
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2234198132' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:45.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:45 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:46.441 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_append PASSED [ 71%]
2026-03-09T14:05:46.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:46 vm04 ceph-mon[54203]: pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:46.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:46 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:46.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:46 vm04 ceph-mon[54203]: osdmap e267: 8 total, 8 up, 8 in
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[52586]: pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[52586]: osdmap e267: 8 total, 8 up, 8 in
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[58994]: pgmap v351: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 442 MiB used, 160 GiB / 160 GiB avail
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:46.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:46 vm03 ceph-mon[58994]: osdmap e267: 8 total, 8 up, 8 in
2026-03-09T14:05:47.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:47 vm04 ceph-mon[54203]: osdmap e268: 8 total, 8 up, 8 in
2026-03-09T14:05:47.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:47 vm03 ceph-mon[52586]: osdmap e268: 8 total, 8 up, 8 in
2026-03-09T14:05:47.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:47 vm03 ceph-mon[58994]: osdmap e268: 8 total, 8 up, 8 in
2026-03-09T14:05:48.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:48 vm04 ceph-mon[54203]: pgmap v354: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:48.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:48 vm04 ceph-mon[54203]: osdmap e269: 8 total, 8 up, 8 in
2026-03-09T14:05:48.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:48 vm03 ceph-mon[52586]: pgmap v354: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:48.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:48 vm03 ceph-mon[52586]: osdmap e269: 8 total, 8 up, 8 in
2026-03-09T14:05:48.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:48 vm03 ceph-mon[58994]: pgmap v354: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:48.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:48 vm03 ceph-mon[58994]: osdmap e269: 8 total, 8 up, 8 in
2026-03-09T14:05:49.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:49 vm04 ceph-mon[54203]: osdmap e270: 8 total, 8 up, 8 in
2026-03-09T14:05:49.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:49 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:49.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:49 vm03 ceph-mon[52586]: osdmap e270: 8 total, 8 up, 8 in
2026-03-09T14:05:49.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:49 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:49.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:49 vm03 ceph-mon[58994]: osdmap e270: 8 total, 8 up, 8 in
2026-03-09T14:05:49.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:49 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:50.486 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_write_full PASSED [ 72%]
2026-03-09T14:05:50.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:05:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available
2026-03-09T14:05:50.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:50 vm04 ceph-mon[54203]: pgmap v357: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:50.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:50 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:50.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:50 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:50.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:50 vm04 ceph-mon[54203]: osdmap e271: 8 total, 8 up, 8 in
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[52586]: pgmap v357: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[52586]: osdmap e271: 8 total, 8 up, 8 in
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[58994]: pgmap v357: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1705255788' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:50.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:50 vm03 ceph-mon[58994]: osdmap e271: 8 total, 8 up, 8 in
2026-03-09T14:05:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:51 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:51.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:51 vm03 ceph-mon[52586]: osdmap e272: 8 total, 8 up, 8 in
2026-03-09T14:05:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:51 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:51.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:51 vm03 ceph-mon[58994]: osdmap e272: 8 total, 8 up, 8 in
2026-03-09T14:05:51.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:51 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-09T14:05:51.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:51 vm04 ceph-mon[54203]: osdmap e272: 8 total, 8 up, 8 in
2026-03-09T14:05:52.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:52 vm04 ceph-mon[54203]: pgmap v360: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:52.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:52 vm04 ceph-mon[54203]: osdmap e273: 8 total, 8 up, 8 in
2026-03-09T14:05:53.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:52 vm03 ceph-mon[52586]: pgmap v360: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:53.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:52 vm03 ceph-mon[52586]: osdmap e273: 8 total, 8 up, 8 in
2026-03-09T14:05:53.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:52 vm03 ceph-mon[58994]: pgmap v360: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-09T14:05:53.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:52 vm03 ceph-mon[58994]: osdmap e273: 8 total, 8 up, 8 in
2026-03-09T14:05:53.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:05:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:05:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-09T14:05:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:53 vm04 ceph-mon[54203]: osdmap e274: 8 total, 8 up, 8 in
2026-03-09T14:05:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:53 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/887330559' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:53 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/887330559' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished
2026-03-09T14:05:53.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:53 vm04 ceph-mon[54203]: osdmap e275: 8 total, 8 up, 8 in
2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[52586]: osdmap e274: 8 total, 8 up, 8 in
2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/887330559' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch
2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[52586]: from='client.?
192.168.123.103:0/887330559' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[52586]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[58994]: osdmap e274: 8 total, 8 up, 8 in 2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/887330559' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/887330559' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:54.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:53 vm03 ceph-mon[58994]: osdmap e275: 8 total, 8 up, 8 in 2026-03-09T14:05:54.610 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_writesame PASSED [ 73%] 2026-03-09T14:05:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:54 vm04 ceph-mon[54203]: pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:54.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:54 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:55.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:54 vm03 ceph-mon[52586]: pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:55.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:54 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-09T14:05:55.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:54 vm03 ceph-mon[58994]: pgmap v363: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:55.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:54 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:05:55.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:55 vm04 ceph-mon[54203]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T14:05:56.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:55 vm03 ceph-mon[52586]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T14:05:56.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:55 vm03 ceph-mon[58994]: osdmap e276: 8 total, 8 up, 8 in 2026-03-09T14:05:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:56 vm04 ceph-mon[54203]: pgmap v366: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:56 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:56.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:56 vm04 ceph-mon[54203]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:56 vm03 ceph-mon[52586]: pgmap v366: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:56 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:56 vm03 ceph-mon[52586]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:56 vm03 
ceph-mon[58994]: pgmap v366: 164 pgs: 164 active+clean; 455 KiB data, 443 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:56 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:05:57.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:56 vm03 ceph-mon[58994]: osdmap e277: 8 total, 8 up, 8 in 2026-03-09T14:05:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:57 vm04 ceph-mon[54203]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T14:05:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:57 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/294765676' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:57.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:57 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[52586]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/294765676' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[58994]: osdmap e278: 8 total, 8 up, 8 in 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/294765676' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:58.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:57 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:05:58.681 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_stat PASSED [ 74%] 2026-03-09T14:05:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:58 vm04 ceph-mon[54203]: pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:58 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:58 vm04 ceph-mon[54203]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T14:05:58.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:05:58 vm04 ceph-mon[54203]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[52586]: pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[52586]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[52586]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[58994]: pgmap v369: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[58994]: osdmap e279: 8 total, 8 up, 8 in 2026-03-09T14:05:59.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:05:58 vm03 ceph-mon[58994]: osdmap e280: 8 total, 8 up, 8 in 2026-03-09T14:06:00.725 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:06:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:00 vm04 ceph-mon[54203]: pgmap v372: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:00.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:00 vm04 ceph-mon[54203]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T14:06:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:00 vm03 ceph-mon[52586]: pgmap v372: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:01.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:00 vm03 ceph-mon[52586]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T14:06:01.042 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:00 vm03 ceph-mon[58994]: pgmap v372: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:01.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:00 vm03 ceph-mon[58994]: osdmap e281: 8 total, 8 up, 8 in 2026-03-09T14:06:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:01 vm04 ceph-mon[54203]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T14:06:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:01 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3240511814' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:01.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:01 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[52586]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3240511814' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[58994]: osdmap e282: 8 total, 8 up, 8 in 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3240511814' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:02.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:01 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:02.749 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_remove PASSED [ 75%] 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[52586]: pgmap v375: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[52586]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[58994]: pgmap v375: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:03.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:02 vm03 ceph-mon[58994]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T14:06:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:02 vm04 ceph-mon[54203]: pgmap v375: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:02 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:02 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:03.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:02 vm04 ceph-mon[54203]: osdmap e283: 8 total, 8 up, 8 in 2026-03-09T14:06:03.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:04.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:03 vm03 ceph-mon[52586]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T14:06:04.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:03 vm03 ceph-mon[58994]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T14:06:04.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:03 vm04 ceph-mon[54203]: osdmap e284: 8 total, 8 up, 8 in 2026-03-09T14:06:05.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:04 vm04 ceph-mon[54203]: pgmap v378: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:05.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:04 vm04 ceph-mon[54203]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T14:06:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:04 vm03 ceph-mon[52586]: pgmap v378: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:05.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:04 vm03 ceph-mon[52586]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T14:06:05.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:04 vm03 ceph-mon[58994]: pgmap v378: 164 pgs: 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:05.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:04 vm03 ceph-mon[58994]: osdmap e285: 8 total, 8 up, 8 in 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 
ceph-mon[54203]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T14:06:06.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:05 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52582]: 2026-03-09T14:06:05.804+0000 7f227632f640 -1 mon.a@0(leader).osd e287 definitely_dead 0 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: osdmap e286: 8 total, 8 up, 8 in 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: pgmap v381: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 444 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:06.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: osdmap e287: 8 total, 8 up, 8 in 2026-03-09T14:06:06.293 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:05 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["4", "0", "7"]}]: dispatch 2026-03-09T14:06:07.106 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:06 vm04 ceph-mon[54203]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:07.106 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:06 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T14:06:07.106 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:06 vm04 ceph-mon[54203]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[52586]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[52586]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[58994]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["4", "0", "7"]}]': finished 2026-03-09T14:06:07.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:06 vm03 ceph-mon[58994]: osdmap e288: 8 total, 5 up, 8 in 2026-03-09T14:06:07.491 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:06:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:06:07.102+0000 7ffb2f940640 -1 osd.4 288 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:07.953 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:07.498+0000 7fc9e46e6640 -1 osd.7 288 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:08.240 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:07.952+0000 7fc9d7ac3640 -1 osd.7 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:08.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:06:07 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:06:07.947+0000 7ffb2251c640 -1 osd.4 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: 
map e288 wrongly marked me down at e288 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: osd.0 marked itself dead as of e288 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: osd.7 marked itself dead as of e288 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:07 vm04 ceph-mon[54203]: osd.4 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: osd.0 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: osd.7 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
14:06:07 vm03 ceph-mon[52586]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[52586]: osd.4 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: Monitor daemon marked osd.0 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: osd.0 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: osd.7 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: Monitor daemon marked osd.4 down, but it is still running 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: map e288 wrongly marked me down at e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:07 vm03 ceph-mon[58994]: osd.4 marked itself dead as of e288 2026-03-09T14:06:08.292 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:06:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:06:07.854+0000 7f97e6c65640 -1 osd.0 288 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:08.293 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 
14:06:07 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:06:07.946+0000 7f97da054640 -1 osd.0 289 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:09.236 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:08 vm03 ceph-mon[52586]: pgmap v384: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:06:09.236 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:08 vm03 ceph-mon[52586]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T14:06:09.237 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:08 vm03 ceph-mon[58994]: pgmap v384: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:06:09.237 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:08 vm03 ceph-mon[58994]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T14:06:09.240 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:08 vm04 ceph-mon[54203]: pgmap v384: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s 2026-03-09T14:06:09.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:08 vm04 ceph-mon[54203]: osdmap e289: 8 total, 5 up, 8 in 2026-03-09T14:06:09.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:09 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: 2026-03-09T14:06:09.232+0000 7fd0bfea6640 -1 calc_pg_upmaps abort due to max <= 0 2026-03-09T14:06:10.491 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:10.218+0000 7fc9dfcfd640 -1 osd.7 290 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:10.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug 
there is no tcmu-runner data available 2026-03-09T14:06:10.491 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:06:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:06:10.220+0000 7ffb2a756640 -1 osd.4 290 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: pgmap v386: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 242 B/s wr, 1 op/s 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.491 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 
ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.492 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:10 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: pgmap v386: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 242 B/s wr, 1 op/s 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:06:10 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:06:10.214+0000 7f97e228e640 -1 osd.0 290 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: pgmap v386: 196 pgs: 84 stale+active+clean, 112 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 242 B/s wr, 1 op/s 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 
0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]: dispatch 2026-03-09T14:06:10.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]: dispatch 2026-03-09T14:06:10.543 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:10 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]': finished 2026-03-09T14:06:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]': finished 2026-03-09T14:06:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]': finished 
2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: osd.4 v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[52586]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]': finished 2026-03-09T14:06:11.543 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: osd.4 v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:06:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:11 vm03 ceph-mon[58994]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 
2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.6", "id": [5, 0]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.7", "id": [1, 7]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.d", "id": [5, 0]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "5.12", "id": [1, 4]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.d", "id": [3, 4]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.14", "id": [3, 0]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.15", "id": [2, 7]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.18", "id": [3, 4]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 
cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.19", "id": [3, 7]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "67.1e", "id": [3, 0]}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: osdmap e290: 8 total, 5 up, 8 in 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: osd.0 v2:192.168.123.103:6801/2121486584 boot 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: osd.7 v2:192.168.123.104:6812/3000381118 boot 
2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: osd.4 v2:192.168.123.104:6800/288742704 boot 2026-03-09T14:06:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:11 vm04 ceph-mon[54203]: osdmap e291: 8 total, 8 up, 8 in 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[52586]: pgmap v388: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[52586]: Health check failed: Reduced data availability: 25 pgs inactive (PG_AVAILABILITY) 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[52586]: Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 43 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[52586]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[58994]: pgmap v388: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[58994]: Health check failed: Reduced data availability: 25 pgs inactive (PG_AVAILABILITY) 2026-03-09T14:06:12.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:12 vm03 ceph-mon[58994]: Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 43 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:12.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:12 
vm03 ceph-mon[58994]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T14:06:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:12 vm04 ceph-mon[54203]: pgmap v388: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:12 vm04 ceph-mon[54203]: Health check failed: Reduced data availability: 25 pgs inactive (PG_AVAILABILITY) 2026-03-09T14:06:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:12 vm04 ceph-mon[54203]: Health check failed: Degraded data redundancy: 216/600 objects degraded (36.000%), 43 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:12 vm04 ceph-mon[54203]: osdmap e292: 8 total, 8 up, 8 in 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[52586]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:13.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:13 vm03 ceph-mon[58994]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T14:06:13.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:13 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:13.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:13 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1645507066' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:13.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:13 vm04 ceph-mon[54203]: osdmap e293: 8 total, 8 up, 8 in 2026-03-09T14:06:14.143 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete PASSED [ 76%] 2026-03-09T14:06:14.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:14 vm03 ceph-mon[52586]: pgmap v391: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:14.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:14 vm03 ceph-mon[52586]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T14:06:14.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:14 vm03 ceph-mon[58994]: pgmap v391: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 
MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:14.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:14 vm03 ceph-mon[58994]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T14:06:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:14 vm04 ceph-mon[54203]: pgmap v391: 196 pgs: 73 active+undersized, 43 undersized+peered, 4 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 33 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/600 objects degraded (36.000%) 2026-03-09T14:06:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:14 vm04 ceph-mon[54203]: osdmap e294: 8 total, 8 up, 8 in 2026-03-09T14:06:16.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:16 vm04 ceph-mon[54203]: pgmap v394: 164 pgs: 57 active+undersized, 34 undersized+peered, 3 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 27 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/597 objects degraded (36.181%) 2026-03-09T14:06:16.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:16 vm04 ceph-mon[54203]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T14:06:16.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:16 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[52586]: pgmap v394: 164 pgs: 57 active+undersized, 34 undersized+peered, 3 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 27 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/597 objects degraded (36.181%) 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[52586]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not 
have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[58994]: pgmap v394: 164 pgs: 57 active+undersized, 34 undersized+peered, 3 stale+active+clean, 29 active+undersized+degraded, 14 undersized+degraded+peered, 27 active+clean; 455 KiB data, 445 MiB used, 160 GiB / 160 GiB avail; 216/597 objects degraded (36.181%) 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[58994]: osdmap e295: 8 total, 8 up, 8 in 2026-03-09T14:06:16.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:16 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T14:06:17.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:17 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52582]: 2026-03-09T14:06:17.194+0000 7f227632f640 -1 mon.a@0(leader).osd e297 definitely_dead 0 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: osdmap e296: 8 total, 8 up, 8 in 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "foo", "format": "json"}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: osdmap e297: 8 total, 8 up, 8 in 2026-03-09T14:06:17.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:17 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["2", "5", "7"]}]: dispatch 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 vm04 ceph-mon[54203]: pgmap v397: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 vm04 ceph-mon[54203]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 25 pgs inactive) 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 vm04 ceph-mon[54203]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/597 objects degraded (36.181%), 43 pgs degraded) 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 vm04 ceph-mon[54203]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 
vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T14:06:18.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:18 vm04 ceph-mon[54203]: osdmap e298: 8 total, 5 up, 8 in 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: pgmap v397: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 25 pgs inactive) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/597 objects degraded (36.181%), 43 pgs degraded) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[52586]: osdmap e298: 8 total, 5 up, 8 in 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: pgmap v397: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 25 pgs inactive) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 216/597 objects degraded (36.181%), 43 pgs degraded) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["2", "5", "7"]}]': finished 2026-03-09T14:06:18.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:18 vm03 ceph-mon[58994]: osdmap e298: 8 total, 5 up, 8 in 2026-03-09T14:06:20.467 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:06:20.467 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:20 vm04 ceph-mon[54203]: pgmap v400: 196 pgs: 54 stale+active+clean, 32 unknown, 110 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:20.467 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:20 vm04 ceph-mon[54203]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T14:06:20.467 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:20 vm04 ceph-mon[54203]: osd.5 marked itself dead as of e299 2026-03-09T14:06:20.467 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:20 vm04 ceph-mon[54203]: osd.7 marked itself dead as of e299 2026-03-09T14:06:20.467 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:20 vm04 ceph-mon[54203]: osd.2 marked itself dead as of e299 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[52586]: pgmap v400: 196 pgs: 54 stale+active+clean, 32 unknown, 110 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[52586]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[52586]: osd.5 marked itself dead as of e299 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[52586]: osd.7 marked itself dead as of e299 2026-03-09T14:06:20.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[52586]: osd.2 marked itself dead as of e299 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[58994]: pgmap v400: 196 pgs: 54 stale+active+clean, 32 unknown, 110 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 81 KiB/s rd, 81 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[58994]: osdmap e299: 8 total, 5 up, 8 in 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[58994]: osd.5 marked itself dead as of e299 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[58994]: osd.7 marked itself dead as of e299 2026-03-09T14:06:20.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:20 vm03 ceph-mon[58994]: osd.2 marked itself dead as of e299 2026-03-09T14:06:20.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:20.463+0000 7fc9e3ed3640 -1 osd.7 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:20.741 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:06:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:06:20.505+0000 7f39b2482640 -1 osd.5 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:21.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:06:20 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:06:20.624+0000 7f83ed735640 -1 osd.2 300 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.542 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:21.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:21.543 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:06:21 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:06:21.251+0000 7f83e854b640 -1 osd.2 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:21.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:21 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:21.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:21.259+0000 7fc9dfcfd640 -1 osd.7 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:21.741 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:06:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:06:21.250+0000 7f39ada99640 -1 osd.5 301 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: Monitor daemon marked osd.5 down, but it is still running 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: map e299 wrongly marked me down at e298 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: osdmap e300: 8 total, 5 up, 8 in 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:21.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:21 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: pgmap v403: 196 pgs: 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 52 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: Health check failed: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[52586]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: pgmap v403: 196 pgs: 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 52 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:22.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T14:06:22.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: Health check failed: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:22.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:22.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:22.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:22 vm03 ceph-mon[58994]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: pgmap v403: 196 pgs: 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 52 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY) 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: Health check failed: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:22 vm04 ceph-mon[54203]: osdmap e301: 8 total, 5 up, 8 in 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[52586]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: Health check cleared: OSD_DOWN (was: 3 osds 
down) 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:23.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:23 vm03 ceph-mon[58994]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: 
dispatch 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: osd.5 v2:192.168.123.104:6804/2731397521 boot 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:23.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:23 vm04 ceph-mon[54203]: osdmap e302: 8 total, 8 up, 8 in 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[52586]: pgmap v406: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[52586]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[58994]: pgmap v406: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[58994]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:24.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:24 vm04 ceph-mon[54203]: pgmap v406: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 255 B/s wr, 0 op/s; 187/600 objects degraded (31.167%) 2026-03-09T14:06:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:24 vm04 ceph-mon[54203]: osdmap e303: 8 total, 8 up, 8 in 2026-03-09T14:06:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:24 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:24 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:25.290 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb PASSED [ 78%] 2026-03-09T14:06:25.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:25 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:25.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:25 vm04 ceph-mon[54203]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T14:06:25.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:25 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:25.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:25 vm03 ceph-mon[52586]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T14:06:25.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:25 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4197746729' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:25.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:25 vm03 ceph-mon[58994]: osdmap e304: 8 total, 8 up, 8 in 2026-03-09T14:06:26.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:26 vm04 ceph-mon[54203]: pgmap v409: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 187/600 objects degraded (31.167%) 2026-03-09T14:06:26.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:26 vm04 ceph-mon[54203]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T14:06:26.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:26 vm03 ceph-mon[52586]: pgmap v409: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 187/600 objects degraded (31.167%) 2026-03-09T14:06:26.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:26 vm03 ceph-mon[52586]: osdmap e305: 8 total, 8 up, 8 in 
2026-03-09T14:06:26.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:26 vm03 ceph-mon[58994]: pgmap v409: 196 pgs: 16 stale+active+clean, 7 undersized+degraded+peered+wait, 29 active+undersized+degraded+wait, 29 undersized+peered+wait, 79 active+undersized+wait, 36 active+clean; 455 KiB data, 446 MiB used, 160 GiB / 160 GiB avail; 187/600 objects degraded (31.167%) 2026-03-09T14:06:26.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:26 vm03 ceph-mon[58994]: osdmap e305: 8 total, 8 up, 8 in 2026-03-09T14:06:27.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:27 vm04 ceph-mon[54203]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T14:06:27.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:27 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T14:06:27.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:27 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:27.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:27 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[52586]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:27 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52582]: 2026-03-09T14:06:27.324+0000 7f227632f640 -1 mon.a@0(leader).osd e307 definitely_dead 0 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[58994]: osdmap e306: 8 total, 8 up, 8 in 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd map", "pool": "test_pool", "object": "bar", "format": "json"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:27.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:27 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd set", "key": "noup"}]: dispatch 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: pgmap v412: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded) 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:28.642 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:28 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: pgmap v412: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: pgmap v412: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 187/600 objects degraded (31.167%), 36 pgs degraded) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: Health check failed: noup flag(s) set (OSDMAP_FLAGS) 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd set", "key": "noup"}]': finished 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: osdmap e307: 8 total, 8 up, 8 in 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:28.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:28 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd down", "ids": ["1", "7", "2"]}]: dispatch 2026-03-09T14:06:29.418 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:06:29 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:06:29.092+0000 7f6fc5e4e640 -1 osd.1 308 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: osdmap e308: 8 total, 5 up, 8 in 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: map e308 wrongly marked me down at e308 2026-03-09T14:06:29.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:29 vm04 ceph-mon[54203]: osd.1 marked itself dead as of e308 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: osdmap e308: 8 total, 5 up, 8 in 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: map e308 wrongly marked me down at e308 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[52586]: osd.1 marked itself dead as of e308 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: Health check failed: 3 osds down (OSD_DOWN) 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd down", "ids": ["1", "7", "2"]}]': finished 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: osdmap e308: 8 total, 5 up, 8 in 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: Monitor daemon marked osd.1 down, but it is still running 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: map e308 wrongly marked me down at e308 2026-03-09T14:06:29.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:29 vm03 ceph-mon[58994]: osd.1 marked itself dead as of e308 2026-03-09T14:06:29.793 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:06:29 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:06:29.409+0000 7f6fb9a3e640 -1 osd.1 309 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[52586]: pgmap v415: 196 pgs: 49 stale+active+clean, 32 unknown, 115 active+clean; 
455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[52586]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[52586]: osd.2 marked itself dead as of e309 2026-03-09T14:06:30.542 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:06:30 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:06:30.429+0000 7f6fb9a3e640 -1 osd.1 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:30.542 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:06:30 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a[82425]: 2026-03-09T14:06:30.215+0000 7f92d5d34640 -1 rgw watcher librados: RGWWatcher::handle_error cookie 94427682835712 err (110) Connection timed out 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[58994]: pgmap v415: 196 pgs: 49 stale+active+clean, 32 unknown, 115 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[58994]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T14:06:30.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:30 vm03 ceph-mon[58994]: osd.2 marked itself dead as of e309 2026-03-09T14:06:30.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:06:30.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:30 vm04 ceph-mon[54203]: pgmap v415: 196 pgs: 49 stale+active+clean, 32 unknown, 115 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 89 B/s, 2 objects/s recovering 2026-03-09T14:06:30.741 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:30 vm04 ceph-mon[54203]: osdmap e309: 8 total, 5 up, 8 in 2026-03-09T14:06:30.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:30 vm04 ceph-mon[54203]: osd.2 marked itself dead as of e309 2026-03-09T14:06:31.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:06:30 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:06:30.701+0000 7f83ecf22640 -1 osd.2 310 osdmap NOUP flag is set, waiting for it to clear 2026-03-09T14:06:31.740 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:06:31 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:06:31.477+0000 7fc9dfcfd640 -1 osd.7 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: map e309 wrongly marked me down at e308 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: osdmap e310: 8 total, 5 up, 8 in 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: osd.7 marked itself dead as of e310 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:31.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:31 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: map e309 wrongly marked me down at e308 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: osdmap e310: 8 total, 5 up, 8 in 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: osd.7 marked itself dead as of e310 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:06:31 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:06:31.446+0000 7f83e854b640 -1 osd.2 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:31.792 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:06:31 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:06:31.449+0000 7f6fc1c78640 -1 osd.1 311 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: Monitor daemon marked osd.2 down, but it is still running 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: map e309 wrongly marked me down at e308 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: osdmap e310: 8 total, 5 up, 8 in 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: osd.7 marked itself dead as of e310 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:31.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:31 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: pgmap v418: 196 pgs: 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 36 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: map e310 wrongly marked me down at e308 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: Health check failed: Degraded data redundancy: 227/597 objects degraded (38.023%), 44 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[52586]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: pgmap v418: 196 pgs: 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 36 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: map e310 wrongly marked me down at e308 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: Health check failed: Degraded data redundancy: 227/597 objects degraded (38.023%), 44 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:32.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:32.793 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:32 vm03 ceph-mon[58994]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: pgmap v418: 196 pgs: 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 36 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: Monitor daemon marked osd.7 down, but it is still running 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: map e310 wrongly marked me down at e308 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: Health check failed: Degraded data redundancy: 227/597 objects degraded (38.023%), 44 pgs degraded (PG_DEGRADED) 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: Health check cleared: OSDMAP_FLAGS (was: noup flag(s) set) 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:32 vm04 ceph-mon[54203]: osdmap e311: 8 total, 5 up, 8 in 2026-03-09T14:06:33.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:33 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:34.042 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: Health check cleared: OSD_DOWN (was: 3 osds down) 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: osd.1 v2:192.168.123.103:6805/4232373287 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: osd.2 v2:192.168.123.103:6809/872739083 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: osd.7 v2:192.168.123.104:6812/3000381118 boot 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: osdmap e312: 8 total, 8 up, 8 in 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 
ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T14:06:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:33 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-09T14:06:34.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:34 vm04 ceph-mon[54203]: pgmap v421: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:34.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:34 vm04 ceph-mon[54203]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T14:06:35.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:34 vm03 ceph-mon[52586]: pgmap v421: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:35.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:34 vm03 ceph-mon[52586]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T14:06:35.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:34 vm03 ceph-mon[58994]: pgmap v421: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 
undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:35.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:34 vm03 ceph-mon[58994]: osdmap e313: 8 total, 8 up, 8 in 2026-03-09T14:06:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:35 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:35 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:35 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:35 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:35 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/681624277' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:35 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:36.735 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_read_wait_for_complete_and_cb_error PASSED [ 79%] 2026-03-09T14:06:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:36 vm04 ceph-mon[54203]: pgmap v423: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:36 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:36 vm04 ceph-mon[54203]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[52586]: pgmap v423: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[52586]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[58994]: pgmap v423: 196 pgs: 5 stale+active+clean, 32 active+undersized, 12 undersized+degraded+peered+wait, 17 active+undersized+degraded+wait, 1 unknown, 47 active+undersized+wait, 36 undersized+peered+wait, 15 active+undersized+degraded, 31 active+clean; 455 KiB data, 455 MiB used, 160 GiB / 160 GiB avail; 227/597 objects degraded (38.023%) 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:36 vm03 ceph-mon[58994]: osdmap e314: 8 total, 8 up, 8 in 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[52586]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:37 
vm03 ceph-mon[58994]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:06:38.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:37 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:37 vm04 ceph-mon[54203]: osdmap e315: 8 total, 8 up, 8 in 2026-03-09T14:06:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:06:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:06:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:37 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:06:38.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:37 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[52586]: pgmap v426: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 
op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[52586]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 227/597 objects degraded (38.023%), 44 pgs degraded) 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[52586]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[58994]: pgmap v426: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[58994]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 227/597 objects degraded (38.023%), 44 pgs degraded) 2026-03-09T14:06:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:38 vm03 ceph-mon[58994]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T14:06:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:38 vm04 ceph-mon[54203]: pgmap v426: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:38 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:38 vm04 ceph-mon[54203]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 227/597 objects degraded 
(38.023%), 44 pgs degraded) 2026-03-09T14:06:39.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:38 vm04 ceph-mon[54203]: osdmap e316: 8 total, 8 up, 8 in 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[52586]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3778793306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[58994]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3778793306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:39 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:39 vm04 ceph-mon[54203]: osdmap e317: 8 total, 8 up, 8 in 2026-03-09T14:06:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:39 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3778793306' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:39 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:39 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:06:40.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:40.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:06:40.778 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_lock PASSED [ 80%] 2026-03-09T14:06:41.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[52586]: pgmap v429: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:41.043 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:41.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[52586]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T14:06:41.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[58994]: pgmap v429: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:41.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:41.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:40 vm03 ceph-mon[58994]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T14:06:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:40 vm04 ceph-mon[54203]: pgmap v429: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1 op/s; 0 B/s, 0 objects/s recovering 2026-03-09T14:06:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:40 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:41.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:40 vm04 ceph-mon[54203]: osdmap e318: 8 total, 8 up, 8 in 2026-03-09T14:06:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:41 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:41 vm03 ceph-mon[52586]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T14:06:42.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:41 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:42.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:41 vm03 ceph-mon[58994]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T14:06:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:41 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:42.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:41 vm04 ceph-mon[54203]: osdmap e319: 8 total, 8 up, 8 in 2026-03-09T14:06:43.107 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:42 vm03 ceph-mon[52586]: pgmap v432: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:43.107 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:42 vm03 ceph-mon[52586]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T14:06:43.107 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:42 vm03 ceph-mon[58994]: pgmap v432: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:43.107 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:42 vm03 ceph-mon[58994]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T14:06:43.241 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:42 vm04 ceph-mon[54203]: pgmap v432: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:43.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:42 vm04 ceph-mon[54203]: osdmap e320: 8 total, 8 up, 8 in 2026-03-09T14:06:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:44.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:43 vm04 ceph-mon[54203]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T14:06:44.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:43 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4201088542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:44.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:43 vm03 ceph-mon[52586]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T14:06:44.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:43 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/4201088542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:44.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:43 vm03 ceph-mon[58994]: osdmap e321: 8 total, 8 up, 8 in 2026-03-09T14:06:44.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:43 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/4201088542' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:44.815 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_execute PASSED [ 81%] 2026-03-09T14:06:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:44 vm04 ceph-mon[54203]: pgmap v435: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:44 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:44 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/4201088542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:44 vm04 ceph-mon[54203]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[52586]: pgmap v435: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/4201088542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[52586]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[58994]: pgmap v435: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/4201088542' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:45.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:44 vm03 ceph-mon[58994]: osdmap e322: 8 total, 8 up, 8 in 2026-03-09T14:06:46.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:45 vm04 ceph-mon[54203]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T14:06:46.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:45 vm03 ceph-mon[52586]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T14:06:46.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:45 vm03 ceph-mon[58994]: osdmap e323: 8 total, 8 up, 8 in 2026-03-09T14:06:47.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:46 vm04 ceph-mon[54203]: pgmap v438: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:47.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:46 vm04 ceph-mon[54203]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T14:06:47.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:46 vm03 ceph-mon[52586]: pgmap v438: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:47.292 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:46 vm03 ceph-mon[52586]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T14:06:47.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:46 vm03 ceph-mon[58994]: pgmap v438: 164 pgs: 164 active+clean; 455 KiB data, 464 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:47.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:46 vm03 ceph-mon[58994]: osdmap e324: 8 total, 8 up, 8 in 2026-03-09T14:06:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:47 vm04 ceph-mon[54203]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T14:06:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:47 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/903227585' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:48.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:47 vm04 ceph-mon[54203]: pgmap v441: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[52586]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/903227585' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[52586]: pgmap v441: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[58994]: osdmap e325: 8 total, 8 up, 8 in 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/903227585' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:48.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:47 vm03 ceph-mon[58994]: pgmap v441: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:48.860 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_execute PASSED [ 82%] 2026-03-09T14:06:49.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:48 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/903227585' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:49.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:48 vm04 ceph-mon[54203]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T14:06:49.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:48 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/903227585' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:49.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:48 vm03 ceph-mon[52586]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T14:06:49.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:48 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/903227585' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:49.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:48 vm03 ceph-mon[58994]: osdmap e326: 8 total, 8 up, 8 in 2026-03-09T14:06:50.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:49 vm04 ceph-mon[54203]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T14:06:50.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:49 vm04 ceph-mon[54203]: pgmap v444: 164 pgs: 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:50.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:49 vm03 ceph-mon[52586]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T14:06:50.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:49 vm03 ceph-mon[52586]: pgmap v444: 164 pgs: 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:50.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:49 vm03 ceph-mon[58994]: osdmap e327: 8 total, 8 up, 8 in 2026-03-09T14:06:50.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:49 vm03 ceph-mon[58994]: pgmap v444: 164 pgs: 164 active+clean; 455 KiB data, 465 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:50.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:06:50 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:06:51.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:50 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:51.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:50 vm04 ceph-mon[54203]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T14:06:51.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:50 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[52586]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[58994]: osdmap e328: 8 total, 8 up, 8 in 2026-03-09T14:06:51.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:50 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:06:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:51 vm04 ceph-mon[54203]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T14:06:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:51 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1913610228' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:51 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:51 vm04 ceph-mon[54203]: pgmap v447: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[52586]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1913610228' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[52586]: pgmap v447: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[58994]: osdmap e329: 8 total, 8 up, 8 in 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/1913610228' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:06:52.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:51 vm03 ceph-mon[58994]: pgmap v447: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:52.904 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_aio_setxattr PASSED [ 83%] 2026-03-09T14:06:53.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:52 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:53.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:52 vm04 ceph-mon[54203]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T14:06:53.292 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:06:53 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:06:53] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:06:53.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:52 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:53.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:52 vm03 ceph-mon[52586]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T14:06:53.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:52 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:06:53.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:52 vm03 ceph-mon[58994]: osdmap e330: 8 total, 8 up, 8 in 2026-03-09T14:06:54.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:53 vm04 ceph-mon[54203]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T14:06:54.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:53 vm04 ceph-mon[54203]: pgmap v450: 164 pgs: 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:54.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:53 vm03 ceph-mon[52586]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T14:06:54.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:53 vm03 ceph-mon[52586]: pgmap v450: 164 pgs: 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:54.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:53 vm03 ceph-mon[58994]: osdmap e331: 8 total, 8 up, 8 in 2026-03-09T14:06:54.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:53 vm03 ceph-mon[58994]: pgmap v450: 164 pgs: 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:55.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:54 vm04 ceph-mon[54203]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T14:06:55.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:06:55.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:54 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:54 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:54 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[52586]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[58994]: osdmap e332: 8 total, 8 up, 8 in 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]: dispatch 2026-03-09T14:06:55.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:54 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:55 vm04 ceph-mon[54203]: pgmap v453: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[52586]: pgmap v453: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app1"}]': finished 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: osdmap e333: 8 total, 8 up, 8 in 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2"}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]: dispatch 2026-03-09T14:06:56.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:55 vm03 ceph-mon[58994]: pgmap v453: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 477 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:06:57.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T14:06:57.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:56 vm04 ceph-mon[54203]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T14:06:57.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:56 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T14:06:57.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:56 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:57.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:56 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[52586]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "test_pool","app": "app2","yes_i_really_mean_it": true}]': finished 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[58994]: osdmap e334: 8 total, 8 up, 8 in 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"dne","key":"key","value":"key"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:57.292 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:56 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:58.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:58 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T14:06:58.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:58 vm04 ceph-mon[54203]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T14:06:58.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:58 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:58 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:58 vm04 ceph-mon[54203]: pgmap v456: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[52586]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[52586]: pgmap v456: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key1","value":"val1"}]': finished 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[58994]: osdmap e335: 8 total, 8 up, 8 in 2026-03-09T14:06:58.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]: dispatch 2026-03-09T14:06:58.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:58 vm03 ceph-mon[58994]: pgmap v456: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:06:59.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:59 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T14:06:59.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:59 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:59.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:59 vm04 ceph-mon[54203]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T14:06:59.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:06:59 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[52586]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app1","key":"key2","value":"val2"}]': finished 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[58994]: osdmap e336: 8 total, 8 up, 8 in 2026-03-09T14:06:59.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:06:59 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]: dispatch 2026-03-09T14:07:00.491 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:07:00 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:07:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:00 vm04 ceph-mon[54203]: pgmap v458: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:00 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T14:07:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:00 vm04 ceph-mon[54203]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T14:07:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:00 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:00.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:00 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[58994]: pgmap v458: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[58994]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[52586]: pgmap v458: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application set","pool":"test_pool","app":"app2","key":"key1","value":"val1"}]': finished 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[52586]: osdmap e337: 8 total, 8 up, 8 in 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:00.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:00 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]: dispatch 2026-03-09T14:07:01.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T14:07:01.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[52586]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T14:07:01.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.443 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T14:07:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[58994]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T14:07:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.444 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:01 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:01 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix":"osd pool application rm","pool":"test_pool","app":"app1","key":"key1"}]': finished 2026-03-09T14:07:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:01 vm04 ceph-mon[54203]: osdmap e338: 8 total, 8 up, 8 in 2026-03-09T14:07:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:01 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2487293595' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:01 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:01.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:01 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:02.116 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_applications PASSED [ 84%] 2026-03-09T14:07:02.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:02 vm04 ceph-mon[54203]: pgmap v461: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:02.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:02 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:02.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:02 vm04 ceph-mon[54203]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[52586]: pgmap v461: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[52586]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[58994]: pgmap v461: 196 pgs: 196 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:02.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:02 vm03 ceph-mon[58994]: osdmap e339: 8 total, 8 up, 8 in 2026-03-09T14:07:03.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:03 vm04 ceph-mon[54203]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T14:07:03.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:03 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:07:03] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:07:03.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:03 vm03 ceph-mon[52586]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T14:07:03.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:03 vm03 ceph-mon[58994]: osdmap e340: 8 total, 8 up, 8 in 2026-03-09T14:07:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:04 vm04 ceph-mon[54203]: pgmap v464: 164 pgs: 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:04 vm04 ceph-mon[54203]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T14:07:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:04 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2283619370' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:04.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:04 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[52586]: pgmap v464: 164 pgs: 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[52586]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2283619370' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[58994]: pgmap v464: 164 pgs: 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[58994]: osdmap e341: 8 total, 8 up, 8 in 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2283619370' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:04.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:04 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:05.224 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_service_daemon PASSED [ 85%] 2026-03-09T14:07:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:05 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:05.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:05 vm04 ceph-mon[54203]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T14:07:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:05 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:05.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:05 vm03 ceph-mon[52586]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T14:07:05.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:05 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:05.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:05 vm03 ceph-mon[58994]: osdmap e342: 8 total, 8 up, 8 in 2026-03-09T14:07:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:06 vm04 ceph-mon[54203]: pgmap v467: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:06 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:06.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:06 vm04 ceph-mon[54203]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:06 vm03 ceph-mon[52586]: pgmap v467: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:06 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:06 vm03 ceph-mon[52586]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 
09 14:07:06 vm03 ceph-mon[58994]: pgmap v467: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 478 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:06 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:06.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:06 vm03 ceph-mon[58994]: osdmap e343: 8 total, 8 up, 8 in 2026-03-09T14:07:07.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:07 vm04 ceph-mon[54203]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T14:07:07.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:07 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/1770624275' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:07.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:07 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[52586]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/1770624275' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[58994]: osdmap e344: 8 total, 8 up, 8 in 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/1770624275' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:07.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:07 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:08.255 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx::test_alignment PASSED [ 86%] 2026-03-09T14:07:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:08 vm04 ceph-mon[54203]: pgmap v470: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:08 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:08.491 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:08 vm04 ceph-mon[54203]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[52586]: pgmap v470: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[52586]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[58994]: pgmap v470: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:08.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:08 vm03 ceph-mon[58994]: osdmap e345: 8 total, 8 up, 8 in 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[52586]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[58994]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:09.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:09 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:09 vm04 ceph-mon[54203]: osdmap e346: 8 total, 8 up, 8 in 2026-03-09T14:07:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:09 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:09.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:09 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: pgmap v473: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: pgmap v473: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:10 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:10.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:07:10 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: pgmap v473: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd erasure-code-profile set", "name": "testprofile-test-ec", "profile": ["k=2", "m=1", "crush-failure-domain=osd"]}]': finished 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: osdmap e347: 8 total, 8 up, 8 in 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]: dispatch 2026-03-09T14:07:10.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:10 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T14:07:11.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T14:07:11.543 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:11 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: osdmap e348: 8 total, 8 up, 8 in 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 8, "pgp_num": 8, "pool": "test-ec", "pool_type": "erasure", "erasure_code_profile": "testprofile-test-ec"}]': finished 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: osdmap e349: 8 total, 8 up, 8 in 2026-03-09T14:07:11.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:11 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/500384667' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[52586]: pgmap v476: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[52586]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[58994]: pgmap v476: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:12.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:12 vm03 ceph-mon[58994]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T14:07:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:12 vm04 ceph-mon[54203]: pgmap v476: 164 pgs: 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:12 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:12 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:12.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:12 vm04 ceph-mon[54203]: osdmap e350: 8 total, 8 up, 8 in 2026-03-09T14:07:13.270 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctxEc::test_alignment PASSED [ 87%] 2026-03-09T14:07:13.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:13 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:07:13] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:07:14.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:14 vm03 ceph-mon[52586]: pgmap v479: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:14.542 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:14 vm03 ceph-mon[52586]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T14:07:14.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:14 vm03 ceph-mon[58994]: pgmap v479: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:14.542 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:14 vm03 ceph-mon[58994]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T14:07:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:14 vm04 ceph-mon[54203]: pgmap v479: 172 pgs: 8 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:14.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:14 vm04 ceph-mon[54203]: osdmap e351: 8 total, 8 up, 8 in 2026-03-09T14:07:15.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:15 vm04 ceph-mon[54203]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T14:07:15.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:15 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2361059637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:15.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:15 vm03 ceph-mon[52586]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T14:07:15.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:15 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2361059637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:15.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:15 vm03 ceph-mon[58994]: osdmap e352: 8 total, 8 up, 8 in 2026-03-09T14:07:15.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:15 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/2361059637' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:16.284 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_last_version PASSED [ 89%] 2026-03-09T14:07:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:16 vm04 ceph-mon[54203]: pgmap v482: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:16 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2361059637' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:16 vm04 ceph-mon[54203]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T14:07:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:16 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:16.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:16 vm04 ceph-mon[54203]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[52586]: pgmap v482: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/2361059637' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[52586]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[52586]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[58994]: pgmap v482: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 479 MiB used, 160 GiB / 160 GiB avail 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2361059637' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[58994]: osdmap e353: 8 total, 8 up, 8 in 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:16.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:16 vm03 ceph-mon[58994]: osdmap e354: 8 total, 8 up, 8 in 2026-03-09T14:07:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:18 vm04 ceph-mon[54203]: pgmap v485: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:18 vm04 ceph-mon[54203]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T14:07:18.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:18 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/714551138' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[52586]: pgmap v485: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[52586]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/714551138' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[58994]: pgmap v485: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[58994]: osdmap e355: 8 total, 8 up, 8 in 2026-03-09T14:07:18.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:18 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/714551138' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:19.290 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoctx2::test_get_stats PASSED [ 90%] 2026-03-09T14:07:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:19 vm04 ceph-mon[54203]: from='client.? 
192.168.123.103:0/714551138' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:19 vm04 ceph-mon[54203]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T14:07:19.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:19 vm04 ceph-mon[54203]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/714551138' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[52586]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[52586]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/714551138' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[58994]: osdmap e356: 8 total, 8 up, 8 in 2026-03-09T14:07:19.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:19 vm03 ceph-mon[58994]: osdmap e357: 8 total, 8 up, 8 in 2026-03-09T14:07:20.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:07:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:07:20.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:20 vm04 ceph-mon[54203]: pgmap v488: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:20.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:20 vm03 ceph-mon[52586]: pgmap v488: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-09T14:07:20.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:20 vm03 ceph-mon[58994]: pgmap v488: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:21 vm04 ceph-mon[54203]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T14:07:21.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:21 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:21.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:21 vm03 ceph-mon[52586]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T14:07:21.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:21 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:21.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:21 vm03 ceph-mon[58994]: osdmap e358: 8 total, 8 up, 8 in 2026-03-09T14:07:21.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:21 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:22.369 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_read PASSED [ 91%] 2026-03-09T14:07:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:22 vm04 ceph-mon[54203]: pgmap v491: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:22 vm04 ceph-mon[54203]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T14:07:22.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:22 vm04 ceph-mon[54203]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T14:07:22.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[52586]: pgmap v491: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:22.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[52586]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T14:07:22.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[52586]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T14:07:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[58994]: pgmap v491: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[58994]: osdmap e359: 8 total, 8 up, 8 in 2026-03-09T14:07:22.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:22 vm03 ceph-mon[58994]: osdmap e360: 8 total, 8 up, 8 in 2026-03-09T14:07:23.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:23 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:07:23] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:07:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:24 vm04 ceph-mon[54203]: pgmap v494: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:24 vm04 ceph-mon[54203]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T14:07:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:24 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:24.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:24 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:24.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[52586]: pgmap v494: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[52586]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[58994]: pgmap v494: 164 pgs: 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[58994]: osdmap e361: 8 total, 8 up, 8 in 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:24.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:24 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:25.471 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_seek PASSED [ 92%] 2026-03-09T14:07:25.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:25 vm04 ceph-mon[54203]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T14:07:25.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:25 vm03 ceph-mon[52586]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T14:07:25.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 
14:07:25 vm03 ceph-mon[58994]: osdmap e362: 8 total, 8 up, 8 in 2026-03-09T14:07:26.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:26 vm03 ceph-mon[52586]: pgmap v497: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:26.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:26 vm03 ceph-mon[52586]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T14:07:26.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:26 vm03 ceph-mon[58994]: pgmap v497: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:26.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:26 vm03 ceph-mon[58994]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T14:07:26.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:26 vm04 ceph-mon[54203]: pgmap v497: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 484 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:26.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:26 vm04 ceph-mon[54203]: osdmap e363: 8 total, 8 up, 8 in 2026-03-09T14:07:27.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:27 vm04 ceph-mon[54203]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T14:07:27.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:27 vm04 ceph-mon[54203]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T14:07:28.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:27 vm03 ceph-mon[52586]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T14:07:28.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:27 vm03 ceph-mon[52586]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T14:07:28.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:27 vm03 ceph-mon[58994]: osdmap e364: 8 total, 8 up, 8 in 2026-03-09T14:07:28.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:27 vm03 ceph-mon[58994]: osdmap e365: 8 total, 8 up, 8 in 2026-03-09T14:07:28.533 
INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestObject::test_write PASSED [ 93%] 2026-03-09T14:07:28.892 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:28 vm04 ceph-mon[54203]: pgmap v500: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:28.892 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:28 vm04 ceph-mon[54203]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T14:07:29.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:28 vm03 ceph-mon[52586]: pgmap v500: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:29.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:28 vm03 ceph-mon[52586]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T14:07:29.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:28 vm03 ceph-mon[58994]: pgmap v500: 196 pgs: 32 unknown, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:29.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:28 vm03 ceph-mon[58994]: osdmap e366: 8 total, 8 up, 8 in 2026-03-09T14:07:30.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:30 vm04 ceph-mon[54203]: pgmap v503: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:30.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:30 vm04 ceph-mon[54203]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T14:07:30.741 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:30 vm04 ceph-mon[54203]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:30.741 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:07:30 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:07:30.792 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[52586]: pgmap v503: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:30.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[52586]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T14:07:30.792 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[52586]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:30.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[58994]: pgmap v503: 164 pgs: 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:30.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[58994]: osdmap e367: 8 total, 8 up, 8 in 2026-03-09T14:07:30.792 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:30 vm03 ceph-mon[58994]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:31.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:31 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:31.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:31 vm04 ceph-mon[54203]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T14:07:32.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:31 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:32.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:31 vm03 ceph-mon[52586]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T14:07:32.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:31 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:32.042 
INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:31 vm03 ceph-mon[58994]: osdmap e368: 8 total, 8 up, 8 in 2026-03-09T14:07:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:32 vm04 ceph-mon[54203]: pgmap v506: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:32.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:32 vm04 ceph-mon[54203]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T14:07:33.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:32 vm03 ceph-mon[52586]: pgmap v506: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:33.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:32 vm03 ceph-mon[52586]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T14:07:33.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:32 vm03 ceph-mon[58994]: pgmap v506: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:33.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:32 vm03 ceph-mon[58994]: osdmap e369: 8 total, 8 up, 8 in 2026-03-09T14:07:33.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:07:33] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:07:33.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:33 vm04 ceph-mon[54203]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T14:07:34.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:33 vm03 ceph-mon[52586]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T14:07:34.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:33 vm03 ceph-mon[58994]: osdmap e370: 8 total, 8 up, 8 in 2026-03-09T14:07:34.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:34 vm04 ceph-mon[54203]: pgmap v509: 196 pgs: 32 
creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:34.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:34 vm04 ceph-mon[54203]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T14:07:35.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:34 vm03 ceph-mon[52586]: pgmap v509: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:35.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:34 vm03 ceph-mon[52586]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T14:07:35.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:34 vm03 ceph-mon[58994]: pgmap v509: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:35.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:34 vm03 ceph-mon[58994]: osdmap e371: 8 total, 8 up, 8 in 2026-03-09T14:07:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:35 vm04 ceph-mon[54203]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T14:07:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:35 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3468806810' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:35.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:35 vm04 ceph-mon[54203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[52586]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3468806810' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[58994]: osdmap e372: 8 total, 8 up, 8 in 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/3468806810' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:36.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:35 vm03 ceph-mon[58994]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd unset", "key": "noup"}]: dispatch 2026-03-09T14:07:36.613 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestIoCtxSelfManagedSnaps::test PASSED [ 94%] 2026-03-09T14:07:36.630 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_monmap_dump PASSED [ 95%] 2026-03-09T14:07:36.645 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_osd_bench PASSED [ 96%] 2026-03-09T14:07:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:36 vm04 ceph-mon[54203]: pgmap v512: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:36 vm04 ceph-mon[54203]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:36 vm04 ceph-mon[54203]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T14:07:36.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:36 vm04 ceph-mon[54203]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[52586]: pgmap v512: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[52586]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[52586]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[52586]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[58994]: pgmap v512: 196 pgs: 32 creating+peering, 164 active+clean; 455 KiB data, 485 MiB used, 159 GiB / 160 GiB avail 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[58994]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd unset", "key": "noup"}]': finished 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[58994]: osdmap e373: 8 total, 8 up, 8 in 2026-03-09T14:07:37.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:36 vm03 ceph-mon[58994]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:37.610 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestCommand::test_ceph_osd_pool_create_utf8 PASSED [ 97%] 2026-03-09T14:07:37.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:37 vm04 ceph-mon[54203]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T14:07:37.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T14:07:37.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:07:37.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T14:07:37.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:37 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3759923528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T14:07:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[52586]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T14:07:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T14:07:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:07:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T14:07:38.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[52586]: from='client.? 192.168.123.103:0/3759923528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T14:07:38.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[58994]: osdmap e374: 8 total, 8 up, 8 in 2026-03-09T14:07:38.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch 2026-03-09T14:07:38.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T14:07:38.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[58994]: from='client.? 192.168.123.103:0/2392778664' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json", "epoch": 1003}]: dispatch 2026-03-09T14:07:38.043 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:37 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3759923528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]: dispatch 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: pgmap v515: 164 pgs: 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: from='client.? 192.168.123.103:0/3759923528' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:07:38.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:38 vm04 ceph-mon[54203]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: pgmap v515: 164 pgs: 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: from='client.? 
192.168.123.103:0/3759923528' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[52586]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: pgmap v515: 164 pgs: 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: from='client.? 
192.168.123.103:0/3759923528' entity='client.admin' cmd='[{"prefix": "osd pool create", "pg_num": 16, "pool": "\u9ec5"}]': finished 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: osdmap e375: 8 total, 8 up, 8 in 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T14:07:39.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:38 vm03 ceph-mon[58994]: from='mgr.24539 ' entity='mgr.y' 2026-03-09T14:07:39.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:39 vm04 ceph-mon[54203]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T14:07:39.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:39 vm04 ceph-mon[54203]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:39.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:39 vm04 ceph-mon[54203]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:39 vm03 ceph-mon[52586]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:39 vm03 ceph-mon[52586]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:39 
vm03 ceph-mon[52586]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:39 vm03 ceph-mon[58994]: osdmap e376: 8 total, 8 up, 8 in 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:39 vm03 ceph-mon[58994]: from='mgr.24539 192.168.123.103:0/353920278' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T14:07:40.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:39 vm03 ceph-mon[58994]: osdmap e377: 8 total, 8 up, 8 in 2026-03-09T14:07:40.621 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test PASSED [ 98%] 2026-03-09T14:07:40.664 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:07:40 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug there is no tcmu-runner data available 2026-03-09T14:07:40.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:40 vm04 ceph-mon[54203]: pgmap v518: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:40.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:40 vm04 ceph-mon[54203]: osdmap e378: 8 total, 8 up, 8 in 2026-03-09T14:07:41.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:40 vm03 ceph-mon[52586]: pgmap v518: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:41.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:40 vm03 ceph-mon[52586]: osdmap e378: 8 total, 8 up, 8 in 2026-03-09T14:07:41.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:40 vm03 ceph-mon[58994]: pgmap v518: 212 pgs: 48 unknown, 164 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:41.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:40 vm03 ceph-mon[58994]: osdmap e378: 8 total, 8 up, 8 in 
2026-03-09T14:07:41.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:41 vm04 ceph-mon[54203]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:41.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:41 vm04 ceph-mon[54203]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:41.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:41 vm04 ceph-mon[54203]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[52586]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[52586]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[52586]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[58994]: from='client.24418 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[58994]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-09T14:07:42.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:41 vm03 ceph-mon[58994]: osdmap e379: 8 total, 8 up, 8 in 2026-03-09T14:07:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:42 vm04 ceph-mon[54203]: pgmap v521: 180 pgs: 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:42.991 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:42 vm04 ceph-mon[54203]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T14:07:43.042 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:42 vm03 ceph-mon[52586]: pgmap v521: 180 pgs: 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:43.042 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:42 vm03 ceph-mon[52586]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T14:07:43.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:42 vm03 ceph-mon[58994]: pgmap v521: 180 pgs: 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:43.042 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:42 vm03 ceph-mon[58994]: osdmap e380: 8 total, 8 up, 8 in 2026-03-09T14:07:43.542 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:43 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y[52795]: ::ffff:192.168.123.104 - - [09/Mar/2026:14:07:43] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-09T14:07:44.759 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py::TestWatchNotify::test_aio_notify PASSED [100%] 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:=============================== warnings summary =============================== 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:210: DeprecationWarning: invalid escape sequence \- 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: assert re.match('[0-9a-f\-]{36}', fsid, re.I) 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:960 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 
/home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:960: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:996 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:996: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:../../../clone.client.0/src/test/pybind/test_rados.py:1024 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: /home/ubuntu/cephtest/clone.client.0/src/test/pybind/test_rados.py:1024: PytestUnknownMarkWarning: Unknown pytest.mark.wait - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: @pytest.mark.wait 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout::210 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: :210: DeprecationWarning: invalid escape sequence \- 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout: 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:-- Docs: https://docs.pytest.org/en/stable/warnings.html 2026-03-09T14:07:44.760 INFO:tasks.workunit.client.0.vm03.stdout:================= 91 passed, 13 warnings in 331.26s (0:05:31) ================== 2026-03-09T14:07:44.782 INFO:tasks.workunit.client.0.vm03.stderr:+ exit 0 2026-03-09T14:07:44.782 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-09T14:07:44.782 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-09T14:07:44.819 INFO:tasks.workunit:Stopping ['rados/test_python.sh'] on client.0... 
2026-03-09T14:07:44.819 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-09T14:07:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:44 vm04 ceph-mon[54203]: pgmap v524: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:45.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:44 vm04 ceph-mon[54203]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T14:07:45.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:44 vm03 ceph-mon[52586]: pgmap v524: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:45.257 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:44 vm03 ceph-mon[52586]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T14:07:45.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:44 vm03 ceph-mon[58994]: pgmap v524: 212 pgs: 32 unknown, 180 active+clean; 455 KiB data, 486 MiB used, 159 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-09T14:07:45.257 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:44 vm03 ceph-mon[58994]: osdmap e381: 8 total, 8 up, 8 in 2026-03-09T14:07:45.259 DEBUG:teuthology.parallel:result is None 2026-03-09T14:07:45.259 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T14:07:45.285 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T14:07:45.285 DEBUG:teuthology.orchestra.run.vm03:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-09T14:07:45.343 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-09T14:07:45.343 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T14:07:45.346 INFO:tasks.cephadm:Teardown begin 2026-03-09T14:07:45.346 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 
2026-03-09T14:07:45.408 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:07:45.437 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T14:07:45.437 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 -- ceph mgr module disable cephadm 2026-03-09T14:07:45.655 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/mon.a/config 2026-03-09T14:07:45.678 INFO:teuthology.orchestra.run.vm03.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-09T14:07:45.714 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-09T14:07:45.714 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T14:07:45.715 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:07:45.738 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T14:07:45.757 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T14:07:45.757 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T14:07:45.757 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a 2026-03-09T14:07:45.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:45 vm03 systemd[1]: Stopping Ceph mon.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:07:45.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:45 vm03 ceph-mon[52586]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T14:07:45.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52582]: 2026-03-09T14:07:45.914+0000 7f227bb3a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T14:07:45.971 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 14:07:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-a[52582]: 2026-03-09T14:07:45.914+0000 7f227bb3a640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T14:07:45.971 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:45 vm03 ceph-mon[58994]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T14:07:46.126 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.a.service' 2026-03-09T14:07:46.167 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:07:46.168 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T14:07:46.168 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-09T14:07:46.168 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.c 2026-03-09T14:07:46.241 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:45 vm04 ceph-mon[54203]: osdmap e382: 8 total, 8 up, 8 in 2026-03-09T14:07:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:46 vm03 systemd[1]: Stopping Ceph mon.c for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:07:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:46 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-c[58990]: 2026-03-09T14:07:46.348+0000 7fb895387640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:07:46.542 INFO:journalctl@ceph.mon.c.vm03.stdout:Mar 09 14:07:46 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-c[58990]: 2026-03-09T14:07:46.348+0000 7fb895387640 -1 mon.c@2(peon) e3 *** Got Signal Terminated ***
2026-03-09T14:07:46.686 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.c.service'
2026-03-09T14:07:46.722 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:46.722 INFO:tasks.cephadm.mon.b:Stopped mon.c
2026-03-09T14:07:46.722 INFO:tasks.cephadm.mon.b:Stopping mon.b...
2026-03-09T14:07:46.722 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.b
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 systemd[1]: Stopping Ceph mon.b for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-b[54199]: 2026-03-09T14:07:46.832+0000 7fe7c5588640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-b[54199]: 2026-03-09T14:07:46.832+0000 7fe7c5588640 -1 mon.b@1(peon) e3 *** Got Signal Terminated ***
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 podman[82463]: 2026-03-09 14:07:46.915934557 +0000 UTC m=+0.097943878 container died a09c1f8ad3efcaaabf7917fc162cc374d4c856471cc88ac0a9567fa98d8eba0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-b, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 podman[82463]: 2026-03-09 14:07:46.935654127 +0000 UTC m=+0.117663448 container remove a09c1f8ad3efcaaabf7917fc162cc374d4c856471cc88ac0a9567fa98d8eba0c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-b, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_REF=squid)
2026-03-09T14:07:47.002 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 14:07:46 vm04 bash[82463]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mon-b
2026-03-09T14:07:47.007 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mon.b.service'
2026-03-09T14:07:47.048 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:47.049 INFO:tasks.cephadm.mon.b:Stopped mon.b
2026-03-09T14:07:47.049 INFO:tasks.cephadm.mgr.y:Stopping mgr.y...
2026-03-09T14:07:47.049 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y
2026-03-09T14:07:47.304 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service'
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 systemd[1]: Stopping Ceph mgr.y for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 podman[92320]: 2026-03-09 14:07:47.197265546 +0000 UTC m=+0.056009882 container died 7d78d84012cdd2baa35b8613bb8d8f6a5cfeea6c6fb9957b3e66c4f311f3e5c9 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team )
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 podman[92320]: 2026-03-09 14:07:47.226141587 +0000 UTC m=+0.084885923 container remove 7d78d84012cdd2baa35b8613bb8d8f6a5cfeea6c6fb9957b3e66c4f311f3e5c9 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 bash[92320]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-mgr-y
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service: Deactivated successfully.
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 systemd[1]: Stopped Ceph mgr.y for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4.
2026-03-09T14:07:47.327 INFO:journalctl@ceph.mgr.y.vm03.stdout:Mar 09 14:07:47 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.y.service: Consumed 22.565s CPU time.
2026-03-09T14:07:47.337 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:47.337 INFO:tasks.cephadm.mgr.y:Stopped mgr.y
2026-03-09T14:07:47.337 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-09T14:07:47.337 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.x
2026-03-09T14:07:47.582 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@mgr.x.service'
2026-03-09T14:07:47.621 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:47.621 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-09T14:07:47.621 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-09T14:07:47.621 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.0
2026-03-09T14:07:48.042 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:47 vm03 systemd[1]: Stopping Ceph osd.0 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:07:48.042 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:07:47.726+0000 7f97e7c79640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:07:48.042 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:07:47.726+0000 7f97e7c79640 -1 osd.0 382 *** Got signal Terminated ***
2026-03-09T14:07:48.042 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:47 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0[62907]: 2026-03-09T14:07:47.726+0000 7f97e7c79640 -1 osd.0 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:07:53.043 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:52 vm03 podman[92425]: 2026-03-09 14:07:52.762357072 +0000 UTC m=+5.052059326 container died 620a84290db0837c9d095a135dc29014cc44b31bf0af818d0f7dc2691553f371 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T14:07:53.521 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92425]: 2026-03-09 14:07:53.262376636 +0000 UTC m=+5.552078910 container remove 620a84290db0837c9d095a135dc29014cc44b31bf0af818d0f7dc2691553f371 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-09T14:07:53.521 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 bash[92425]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0
2026-03-09T14:07:53.521 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92493]: 2026-03-09 14:07:53.467938799 +0000 UTC m=+0.061295734 container create 0432a360ebe7d608240ad2168891431ac528efbfba0213dae8ba5bbd722a06ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0-deactivate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
2026-03-09T14:07:53.521 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92493]: 2026-03-09 14:07:53.418051851 +0000 UTC m=+0.011408795 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T14:07:53.792 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92493]: 2026-03-09 14:07:53.637613399 +0000 UTC m=+0.230970334 container init 0432a360ebe7d608240ad2168891431ac528efbfba0213dae8ba5bbd722a06ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, ceph=True)
2026-03-09T14:07:53.792 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92493]: 2026-03-09 14:07:53.640898668 +0000 UTC m=+0.234255603 container start 0432a360ebe7d608240ad2168891431ac528efbfba0213dae8ba5bbd722a06ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0-deactivate, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
2026-03-09T14:07:53.792 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 14:07:53 vm03 podman[92493]: 2026-03-09 14:07:53.649668093 +0000 UTC m=+0.243025038 container attach 0432a360ebe7d608240ad2168891431ac528efbfba0213dae8ba5bbd722a06ae (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-0-deactivate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T14:07:53.928 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.0.service'
2026-03-09T14:07:53.966 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:53.966 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-09T14:07:53.966 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-09T14:07:53.966 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.1
2026-03-09T14:07:54.144 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:54 vm03 systemd[1]: Stopping Ceph osd.1 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:07:54.542 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:07:54.137+0000 7f6fc6e62640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:07:54.542 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:07:54.137+0000 7f6fc6e62640 -1 osd.1 382 *** Got signal Terminated ***
2026-03-09T14:07:54.542 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:54 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1[67912]: 2026-03-09T14:07:54.137+0000 7f6fc6e62640 -1 osd.1 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:07:59.423 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:59 vm03 podman[92587]: 2026-03-09 14:07:59.164222822 +0000 UTC m=+5.047870464 container died 7a79f446a30b8005319e7fa65c97a832088a9865d106b542fcf5f4a126fc9038 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-09T14:07:59.424 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:59 vm03 podman[92587]: 2026-03-09 14:07:59.229171534 +0000 UTC m=+5.112819176 container remove 7a79f446a30b8005319e7fa65c97a832088a9865d106b542fcf5f4a126fc9038 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1, org.label-schema.build-date=20260223, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-09T14:07:59.424 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:59 vm03 bash[92587]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1
2026-03-09T14:07:59.424 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 09 14:07:59 vm03 podman[92668]: 2026-03-09 14:07:59.391504892 +0000 UTC m=+0.019244002 container create fb15d43229631fd2df798c56fe7123a26d82f7a9508c16dd9e42e851483ce5c3 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-1-deactivate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20260223, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS)
2026-03-09T14:07:59.596 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.1.service'
2026-03-09T14:07:59.631 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:07:59.631 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-09T14:07:59.631 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-09T14:07:59.631 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.2
2026-03-09T14:08:00.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:07:59 vm03 systemd[1]: Stopping Ceph osd.2 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:08:00.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:07:59 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:07:59.762+0000 7f83edf36640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:00.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:07:59 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:07:59.762+0000 7f83edf36640 -1 osd.2 382 *** Got signal Terminated ***
2026-03-09T14:08:00.042 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:07:59 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2[73032]: 2026-03-09T14:07:59.762+0000 7f83edf36640 -1 osd.2 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:08:05.087 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:08:04 vm03 podman[92763]: 2026-03-09 14:08:04.824392237 +0000 UTC m=+5.074050290 container died a4b0cc72d1932939489cb83189752d2ca03a218aa61c0aa96155b76c02714213 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-09T14:08:05.087 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:08:04 vm03 podman[92763]: 2026-03-09 14:08:04.868143972 +0000 UTC m=+5.117802025 container remove a4b0cc72d1932939489cb83189752d2ca03a218aa61c0aa96155b76c02714213 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
2026-03-09T14:08:05.087 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:08:04 vm03 bash[92763]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2
2026-03-09T14:08:05.087 INFO:journalctl@ceph.osd.2.vm03.stdout:Mar 09 14:08:05 vm03 podman[92834]: 2026-03-09 14:08:05.044947676 +0000 UTC m=+0.024079885 container create 21bac148132211bd18bb0c24fdf724749de54f8843f3fcb43953a4466a4b4566 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-2-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS)
2026-03-09T14:08:05.275 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.2.service'
2026-03-09T14:08:05.313 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:08:05.313 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-09T14:08:05.313 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-09T14:08:05.313 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.3
2026-03-09T14:08:05.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:05 vm03 systemd[1]: Stopping Ceph osd.3 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:08:05.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3[78109]: 2026-03-09T14:08:05.471+0000 7faa0b20d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:05.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3[78109]: 2026-03-09T14:08:05.471+0000 7faa0b20d640 -1 osd.3 382 *** Got signal Terminated ***
2026-03-09T14:08:05.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:05 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3[78109]: 2026-03-09T14:08:05.471+0000 7faa0b20d640 -1 osd.3 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:08:10.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92929]: 2026-03-09 14:08:10.514398118 +0000 UTC m=+5.057574760 container died 11cf0bf07f7ea5bc40fe8d781e5209316f9817e4b723d6498fdada6bbe2d012e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3, ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default)
2026-03-09T14:08:10.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92929]: 2026-03-09 14:08:10.536053824 +0000 UTC m=+5.079230475 container remove 11cf0bf07f7ea5bc40fe8d781e5209316f9817e4b723d6498fdada6bbe2d012e (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-09T14:08:10.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 bash[92929]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3
2026-03-09T14:08:10.792 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92998]: 2026-03-09 14:08:10.698623925 +0000 UTC m=+0.017057580 container create 0b4ae4e7fdf77365d77385733080b69542bf28922f25b7ede41ff508398ca39d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3-deactivate, CEPH_REF=squid, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-09T14:08:10.793 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92998]: 2026-03-09 14:08:10.745365025 +0000 UTC m=+0.063798690 container init 0b4ae4e7fdf77365d77385733080b69542bf28922f25b7ede41ff508398ca39d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3-deactivate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223)
2026-03-09T14:08:10.793 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92998]: 2026-03-09 14:08:10.748852243 +0000 UTC m=+0.067285898 container start 0b4ae4e7fdf77365d77385733080b69542bf28922f25b7ede41ff508398ca39d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3-deactivate, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
2026-03-09T14:08:10.793 INFO:journalctl@ceph.osd.3.vm03.stdout:Mar 09 14:08:10 vm03 podman[92998]: 2026-03-09 14:08:10.750121007 +0000 UTC m=+0.068554662 container attach 0b4ae4e7fdf77365d77385733080b69542bf28922f25b7ede41ff508398ca39d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-3-deactivate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3)
2026-03-09T14:08:10.922 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.3.service'
2026-03-09T14:08:10.960 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:08:10.960 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-09T14:08:10.960 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-09T14:08:10.960 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.4
2026-03-09T14:08:11.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:10 vm04 systemd[1]: Stopping Ceph osd.4 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:08:11.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:11 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:08:11.067+0000 7ffb30141640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:11.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:11 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:08:11.067+0000 7ffb30141640 -1 osd.4 382 *** Got signal Terminated ***
2026-03-09T14:08:11.241 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:11 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:08:11.067+0000 7ffb30141640 -1 osd.4 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:08:13.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:13 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:13.489+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 
192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:14.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:14 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:14.520+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:15.491 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:15 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4[58748]: 2026-03-09T14:08:15.186+0000 7ffb2bf59640 -1 osd.4 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:48.854252+0000 front 2026-03-09T14:07:48.854205+0000 (oldest deadline 2026-03-09T14:08:14.753963+0000) 2026-03-09T14:08:15.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:15 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:15.541+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82672]: 2026-03-09 14:08:16.100820873 +0000 UTC m=+5.049490582 container died 20b58dbb3a616adda833fb1c1bd37bce02917b47ccfe71a515019bf009cc0250 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82672]: 2026-03-09 14:08:16.122876004 +0000 UTC m=+5.071545704 container remove 20b58dbb3a616adda833fb1c1bd37bce02917b47ccfe71a515019bf009cc0250 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 bash[82672]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82749]: 2026-03-09 14:08:16.269024309 +0000 UTC m=+0.017079850 container create 0b8e974d73110359db754e897c9e884d252f160367e999dabcef91bd037e9e11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4-deactivate, OSD_FLAVOR=default, CEPH_REF=squid, 
org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82749]: 2026-03-09 14:08:16.307335919 +0000 UTC m=+0.055391460 container init 0b8e974d73110359db754e897c9e884d252f160367e999dabcef91bd037e9e11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4-deactivate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82749]: 2026-03-09 14:08:16.310433897 +0000 UTC m=+0.058489428 container start 0b8e974d73110359db754e897c9e884d252f160367e999dabcef91bd037e9e11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, 
name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4-deactivate, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82749]: 2026-03-09 14:08:16.311405185 +0000 UTC m=+0.059460726 container attach 0b8e974d73110359db754e897c9e884d252f160367e999dabcef91bd037e9e11 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-4-deactivate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS) 2026-03-09T14:08:16.364 INFO:journalctl@ceph.osd.4.vm04.stdout:Mar 09 14:08:16 vm04 podman[82749]: 2026-03-09 14:08:16.262616856 +0000 UTC m=+0.010672397 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T14:08:16.488 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.4.service' 2026-03-09T14:08:16.527 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:08:16.527 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-09T14:08:16.527 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-09T14:08:16.528 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5 2026-03-09T14:08:16.689 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:16.536+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:16.689 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:16.546+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:16.689 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:16 vm04 systemd[1]: Stopping Ceph osd.5 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:08:16.991 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:16.685+0000 7f39b2c83640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-09T14:08:16.991 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:16.685+0000 7f39b2c83640 -1 osd.5 382 *** Got signal Terminated *** 2026-03-09T14:08:16.991 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:16 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:16.685+0000 7f39b2c83640 -1 osd.5 382 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T14:08:17.990 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:17 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:17.518+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:17.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:17 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:17.507+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:18.465 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:18 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:18.097+0000 7f39af29c640 -1 osd.5 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.697135+0000 front 2026-03-09T14:07:51.697162+0000 (oldest deadline 
2026-03-09T14:08:17.596539+0000) 2026-03-09T14:08:18.740 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:18 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:18.515+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:18.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:18 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:18.462+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:19.450 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:19 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:19.130+0000 7f39af29c640 -1 osd.5 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.697135+0000 front 2026-03-09T14:07:51.697162+0000 (oldest deadline 2026-03-09T14:08:17.596539+0000) 2026-03-09T14:08:19.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:19 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:19.532+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:19.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:19 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:19.446+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:20.491 
INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:20.150+0000 7f39af29c640 -1 osd.5 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.697135+0000 front 2026-03-09T14:07:51.697162+0000 (oldest deadline 2026-03-09T14:08:17.596539+0000) 2026-03-09T14:08:20.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:20.547+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:20.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:20.547+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000) 2026-03-09T14:08:20.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:20 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:20.491+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:21.491 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5[63695]: 2026-03-09T14:08:21.187+0000 7f39af29c640 -1 osd.5 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.697135+0000 front 2026-03-09T14:07:51.697162+0000 (oldest deadline 2026-03-09T14:08:17.596539+0000) 2026-03-09T14:08:21.890 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:21 vm04 
ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:21.521+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000) 2026-03-09T14:08:21.890 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:21.521+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000) 2026-03-09T14:08:21.891 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:21 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:21.527+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000) 2026-03-09T14:08:21.891 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82846]: 2026-03-09 14:08:21.714239502 +0000 UTC m=+5.044009153 container died 374554d7ad8390acf1d6e109b75cdf7c6e1bd99ea093026d168a00293e77a16f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, 
CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team ) 2026-03-09T14:08:21.891 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82846]: 2026-03-09 14:08:21.735117891 +0000 UTC m=+5.064887542 container remove 374554d7ad8390acf1d6e109b75cdf7c6e1bd99ea093026d168a00293e77a16f (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-09T14:08:21.891 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 bash[82846]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5 2026-03-09T14:08:22.124 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service' 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82913]: 2026-03-09 14:08:21.888371374 +0000 UTC m=+0.018030129 container create e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82913]: 2026-03-09 14:08:21.939471191 +0000 UTC m=+0.069129956 container init e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20260223, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82913]: 2026-03-09 14:08:21.94606324 +0000 UTC m=+0.075721995 container start e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, org.label-schema.license=GPLv2, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82913]: 2026-03-09 14:08:21.953849443 +0000 UTC m=+0.083508198 container attach e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:21 vm04 podman[82913]: 2026-03-09 14:08:21.880805923 +0000 UTC m=+0.010464778 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T14:08:22.150 
INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 podman[82913]: 2026-03-09 14:08:22.092138879 +0000 UTC m=+0.221797623 container died e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, org.label-schema.build-date=20260223, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 podman[82913]: 2026-03-09 14:08:22.108787891 +0000 UTC m=+0.238446646 container remove e9a624d313dd2668543780b699da44d2e8de62fddff463c44b9fe4e4a415999c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-5-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service: Deactivated successfully. 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service: Unit process 82925 (conmon) remains running after unit stopped. 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service: Unit process 82933 (podman) remains running after unit stopped. 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: Stopped Ceph osd.5 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:08:22.150 INFO:journalctl@ceph.osd.5.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.5.service: Consumed 6.366s CPU time, 328.0M memory peak. 2026-03-09T14:08:22.161 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:08:22.161 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-09T14:08:22.161 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-09T14:08:22.162 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.6 2026-03-09T14:08:22.491 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 systemd[1]: Stopping Ceph osd.6 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:08:22.491 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:22.304+0000 7f8ad61a8640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:22.491 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:22.304+0000 7f8ad61a8640 -1 osd.6 382 *** Got signal Terminated ***
2026-03-09T14:08:22.491 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:22.304+0000 7f8ad61a8640 -1 osd.6 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:08:22.824 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:22.491+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000)
2026-03-09T14:08:22.824 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:22.491+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:58.173764+0000 front 2026-03-09T14:07:58.173521+0000 (oldest deadline 2026-03-09T14:08:22.273417+0000)
2026-03-09T14:08:22.824 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:22.535+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:22.824 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:22 vm04 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:22.535+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:23.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:23 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:23.466+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000)
2026-03-09T14:08:23.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:23 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:23.466+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:58.173764+0000 front 2026-03-09T14:07:58.173521+0000 (oldest deadline 2026-03-09T14:08:22.273417+0000)
2026-03-09T14:08:23.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:23 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:23.571+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:23.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:23 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:23.571+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:24.990 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:24 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:24.496+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000)
2026-03-09T14:08:24.991 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:24 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:24.496+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:58.173764+0000 front 2026-03-09T14:07:58.173521+0000 (oldest deadline 2026-03-09T14:08:22.273417+0000)
2026-03-09T14:08:24.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:24 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:24.549+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:24.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:24 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:24.549+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:25.740 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:25 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:25.539+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:25.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:25 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:25.539+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:25.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:25 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:25.477+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000)
2026-03-09T14:08:25.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:25 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:25.477+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:58.173764+0000 front 2026-03-09T14:07:58.173521+0000 (oldest deadline 2026-03-09T14:08:22.273417+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:26.569+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:26.569+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:26.569+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:26.459+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:52.572761+0000 front 2026-03-09T14:07:52.572695+0000 (oldest deadline 2026-03-09T14:08:13.072303+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:26.459+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:58.173764+0000 front 2026-03-09T14:07:58.173521+0000 (oldest deadline 2026-03-09T14:08:22.273417+0000)
2026-03-09T14:08:26.741 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:26 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6[68697]: 2026-03-09T14:08:26.459+0000 7f8ad27c1640 -1 osd.6 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:02.273885+0000 front 2026-03-09T14:08:02.273874+0000 (oldest deadline 2026-03-09T14:08:26.373732+0000)
2026-03-09T14:08:27.592 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.560+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:27.592 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.560+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:27.592 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.560+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local podman[83010]: 2026-03-09 14:08:27.338882199 +0000 UTC m=+5.051646447 container died a3090a59c3a5ffb43f4882bfe94cd7bfcd793f17ad079a35c87ac7a19c8fcfed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local podman[83010]: 2026-03-09 14:08:27.370261944 +0000 UTC m=+5.083026192 container remove a3090a59c3a5ffb43f4882bfe94cd7bfcd793f17ad079a35c87ac7a19c8fcfed (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git)
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local bash[83010]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local podman[83097]: 2026-03-09 14:08:27.541680178 +0000 UTC m=+0.034157516 container create 2310907ee502c0a976ab7dd90a7d8b70953f47272cc144d7c98d252563532a22 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6-deactivate, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local podman[83097]: 2026-03-09 14:08:27.585830717 +0000 UTC m=+0.078308045 container init 2310907ee502c0a976ab7dd90a7d8b70953f47272cc144d7c98d252563532a22 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team )
2026-03-09T14:08:27.593 INFO:journalctl@ceph.osd.6.vm04.stdout:Mar 09 14:08:27 vm04.local podman[83097]: 2026-03-09 14:08:27.588924458 +0000 UTC m=+0.081401796 container start 2310907ee502c0a976ab7dd90a7d8b70953f47272cc144d7c98d252563532a22 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-6-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
2026-03-09T14:08:27.762 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.6.service'
2026-03-09T14:08:27.804 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:08:27.804 INFO:tasks.cephadm.osd.6:Stopped osd.6
2026-03-09T14:08:27.804 INFO:tasks.cephadm.osd.7:Stopping osd.7...
2026-03-09T14:08:27.804 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.7
2026-03-09T14:08:28.241 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local systemd[1]: Stopping Ceph osd.7 for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:08:28.241 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.951+0000 7fc9e4ee7640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:28.241 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.951+0000 7fc9e4ee7640 -1 osd.7 382 *** Got signal Terminated ***
2026-03-09T14:08:28.241 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:27 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:27.951+0000 7fc9e4ee7640 -1 osd.7 382 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-09T14:08:28.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:28 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:28.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:28.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:28 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:28.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:28.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:28 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:28.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:29.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:29 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:29.574+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:29.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:29 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:29.574+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:29.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:29 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:29.574+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:30.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:30 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:30.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:30.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:30 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:30.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:30.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:30 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:30.534+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:31.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:31 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:31.531+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:31.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:31 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:31.531+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:31.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:31 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:31.531+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:31.991 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:31 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:31.531+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6815 osd.3 since back 2026-03-09T14:08:08.130469+0000 front 2026-03-09T14:08:08.130219+0000 (oldest deadline 2026-03-09T14:08:31.030265+0000)
2026-03-09T14:08:32.893 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:32.561+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6803 osd.0 since back 2026-03-09T14:07:51.229212+0000 front 2026-03-09T14:07:51.229153+0000 (oldest deadline 2026-03-09T14:08:15.928640+0000)
2026-03-09T14:08:32.893 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:32.561+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6807 osd.1 since back 2026-03-09T14:07:55.929394+0000 front 2026-03-09T14:07:55.929385+0000 (oldest deadline 2026-03-09T14:08:20.028899+0000)
2026-03-09T14:08:32.893 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:32.561+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6811 osd.2 since back 2026-03-09T14:08:00.529742+0000 front 2026-03-09T14:08:00.529781+0000 (oldest deadline 2026-03-09T14:08:26.429487+0000)
2026-03-09T14:08:32.893 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:32.561+0000 7fc9e1500640 -1 osd.7 382 heartbeat_check: no reply from 192.168.123.103:6815 osd.3 since back 2026-03-09T14:08:08.130469+0000 front 2026-03-09T14:08:08.130219+0000 (oldest deadline 2026-03-09T14:08:31.030265+0000)
2026-03-09T14:08:32.893 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7[73811]: 2026-03-09T14:08:32.561+0000 7fc9e1500640 -1 osd.7 382 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.24415.0:415 2.0 2:353ec8f9:::gateway.conf:head [getxattr epoch in=5b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e382)
2026-03-09T14:08:33.189 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:32 vm04.local podman[83195]: 2026-03-09 14:08:32.981045521 +0000 UTC m=+5.043361381 container died 5c0219d5dc7da7784cb87719a482345b2a18bff02f61bac2b204606f7de38e9c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7, ceph=True, CEPH_REF=squid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team )
2026-03-09T14:08:33.189 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83195]: 2026-03-09 14:08:33.031105 +0000 UTC m=+5.093420860 container remove 5c0219d5dc7da7784cb87719a482345b2a18bff02f61bac2b204606f7de38e9c (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
2026-03-09T14:08:33.189 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local bash[83195]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83261]: 2026-03-09 14:08:33.186648851 +0000 UTC m=+0.020395226 container create 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83261]: 2026-03-09 14:08:33.23518552 +0000 UTC m=+0.068931895 container init 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2)
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83261]: 2026-03-09 14:08:33.238402461 +0000 UTC m=+0.072148826 container start 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83261]: 2026-03-09 14:08:33.239226464 +0000 UTC m=+0.072972839 container attach 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83261]: 2026-03-09 14:08:33.176857494 +0000 UTC m=+0.010603869 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83281]: 2026-03-09 14:08:33.402377704 +0000 UTC m=+0.010999057 container died 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
2026-03-09T14:08:33.443 INFO:journalctl@ceph.osd.7.vm04.stdout:Mar 09 14:08:33 vm04.local podman[83281]: 2026-03-09 14:08:33.440574909 +0000 UTC m=+0.049196253 container remove 88fd1fa9fd178c09fd8960a8e9fc59ea645231e20e7f804ffa8a6a4b3c960e00 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-osd-7-deactivate, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-09T14:08:33.458 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@osd.7.service'
2026-03-09T14:08:33.495 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:08:33.495 INFO:tasks.cephadm.osd.7:Stopped osd.7
2026-03-09T14:08:33.495 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a...
2026-03-09T14:08:33.495 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@rgw.foo.a
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.474Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.476Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.481Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.481Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.483Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.741 INFO:journalctl@ceph.prometheus.a.vm04.stdout:Mar 09 14:08:33 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-prometheus-a[80989]: ts=2026-03-09T14:08:33.484Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.103:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.103:8765: connect: connection refused"
2026-03-09T14:08:33.792 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:33 vm03 systemd[1]: Stopping Ceph rgw.foo.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4...
2026-03-09T14:08:33.792 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a[82425]: 2026-03-09T14:08:33.613+0000 7f93de745640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0
2026-03-09T14:08:33.792 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:33 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a[82425]: 2026-03-09T14:08:33.613+0000 7f93e1fb4980 -1 shutting down
2026-03-09T14:08:43.894 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:43 vm03 bash[93094]: time="2026-03-09T14:08:43Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a in 10 seconds, resorting to SIGKILL"
2026-03-09T14:08:43.894 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:43 vm03 podman[93094]: 2026-03-09 14:08:43.647010083 +0000 UTC m=+10.055957864 container died bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, ceph=True)
2026-03-09T14:08:43.894 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:43 vm03 podman[93094]: 2026-03-09 14:08:43.853200214 +0000 UTC m=+10.262147995 container remove bbd4b3b8cf7b9026c7d901d20ab4a4d4eec0d7faef8876732178564e1bb27bde (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, org.label-schema.schema-version=1.0)
2026-03-09T14:08:43.895 INFO:journalctl@ceph.rgw.foo.a.vm03.stdout:Mar 09 14:08:43 vm03 bash[93094]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-rgw-foo-a
2026-03-09T14:08:43.935 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@rgw.foo.a.service'
2026-03-09T14:08:43.971 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-09T14:08:43.971 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a
2026-03-09T14:08:43.971 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a...
2026-03-09T14:08:43.971 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@prometheus.a 2026-03-09T14:08:44.174 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@prometheus.a.service' 2026-03-09T14:08:44.208 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T14:08:44.208 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-09T14:08:44.208 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm rm-cluster --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 --force --keep-logs 2026-03-09T14:08:44.350 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T14:08:46.005 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:45 vm03 systemd[1]: Stopping Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:08:46.005 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:45 vm03 ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a[88879]: ts=2026-03-09T14:08:45.991Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-09T14:08:46.276 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: Stopping Ceph node-exporter.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 podman[93610]: 2026-03-09 14:08:46.002896756 +0000 UTC m=+0.028815326 container died 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 podman[93610]: 2026-03-09 14:08:46.020106256 +0000 UTC m=+0.046024826 container remove 7278bf964c26bfe28a93e6c49c26421799ab166b8db8162a2112b7eeaa8fffd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a, maintainer=The Prometheus Authors ) 2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 podman[93610]: 2026-03-09 14:08:46.021599382 +0000 UTC m=+0.047517952 volume remove 3a30cf6a270a6077aed3edc7cff43c1f1a549077125f64418c02d7d78def3116 2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 bash[93610]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-alertmanager-a 2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@alertmanager.a.service: Deactivated successfully. 2026-03-09T14:08:46.276 INFO:journalctl@ceph.alertmanager.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: Stopped Ceph alertmanager.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 
2026-03-09T14:08:46.542 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 podman[93710]: 2026-03-09 14:08:46.403036256 +0000 UTC m=+0.075263043 container died 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T14:08:46.845 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 podman[93710]: 2026-03-09 14:08:46.704118872 +0000 UTC m=+0.376345657 container remove 39896c1f6e86d38e085213eb6f26df39a76c76b7414d6c7dec0d0a1796b9d252 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a, maintainer=The Prometheus Authors ) 2026-03-09T14:08:46.846 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 bash[93710]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-a 2026-03-09T14:08:46.846 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:08:46.846 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.a.service: Failed with result 'exit-code'. 2026-03-09T14:08:46.846 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: Stopped Ceph node-exporter.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:08:46.846 INFO:journalctl@ceph.node-exporter.a.vm03.stdout:Mar 09 14:08:46 vm03 systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.a.service: Consumed 1.074s CPU time. 
2026-03-09T14:08:47.369 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm rm-cluster --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 --force --keep-logs 2026-03-09T14:08:47.503 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T14:08:48.626 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:48 vm04.local systemd[1]: Stopping Ceph iscsi.iscsi.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:08:48.991 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:48 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a[77983]: debug Shutdown received 2026-03-09T14:08:58.964 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local bash[83786]: time="2026-03-09T14:08:58Z" level=warning msg="StopSignal SIGTERM failed to stop container ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a in 10 seconds, resorting to SIGKILL" 2026-03-09T14:08:58.965 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local podman[83786]: 2026-03-09 14:08:58.718125946 +0000 UTC m=+10.036820225 container died af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default) 
2026-03-09T14:08:58.965 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local podman[83786]: 2026-03-09 14:08:58.892678175 +0000 UTC m=+10.211372454 container remove af1255a6c4e865acd33d0c64288c121c3b34b291274ff5dc4fa7fd4144116a82 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223) 2026-03-09T14:08:58.965 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local bash[83786]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-iscsi-iscsi-a 2026-03-09T14:08:58.965 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a 2026-03-09T14:08:59.225 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@iscsi.iscsi.a.service: Failed with result 'exit-code'. 2026-03-09T14:08:59.225 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local systemd[1]: Stopped Ceph iscsi.iscsi.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 
2026-03-09T14:08:59.225 INFO:journalctl@ceph.iscsi.iscsi.a.vm04.stdout:Mar 09 14:08:58 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@iscsi.iscsi.a.service: Consumed 1.206s CPU time. 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local systemd[1]: Stopping Ceph grafana.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=server t=2026-03-09T14:08:59.66919002Z level=info msg="Shutdown started" reason="System signal: terminated" 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=tracing t=2026-03-09T14:08:59.669512472Z level=info msg="Closing tracing" 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=ticker t=2026-03-09T14:08:59.669685376Z level=info msg=stopped last_tick=2026-03-09T14:08:50Z 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a[80082]: logger=grafana-apiserver t=2026-03-09T14:08:59.669788228Z level=info msg="StorageObjectCountTracker pruner is exiting" 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local podman[84023]: 2026-03-09 14:08:59.681139395 +0000 UTC m=+0.025944242 container died 539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a, maintainer=Grafana Labs ) 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local podman[84023]: 2026-03-09 14:08:59.698803278 +0000 UTC m=+0.043608125 container remove 
539aaf32dae0d4a6d217e6660dd94e9ba04765643ddfdea0f0edc9e9e7eeeaa6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a, maintainer=Grafana Labs ) 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local bash[84023]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-grafana-a 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@grafana.a.service: Deactivated successfully. 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local systemd[1]: Stopped Ceph grafana.a for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:08:59.925 INFO:journalctl@ceph.grafana.a.vm04.stdout:Mar 09 14:08:59 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@grafana.a.service: Consumed 3.439s CPU time. 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:08:59 vm04.local systemd[1]: Stopping Ceph node-exporter.b for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4... 
2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:08:59 vm04.local podman[84124]: 2026-03-09 14:08:59.9882221 +0000 UTC m=+0.014993365 container died 08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local podman[84124]: 2026-03-09 14:09:00.002239928 +0000 UTC m=+0.029011193 container remove 08aca6a47a5d5f219a044f84c99c683ad7481bfb33d7740c655016bb1af5cf87 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b, maintainer=The Prometheus Authors ) 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local bash[84124]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4-node-exporter-b 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.b.service: Main process exited, code=exited, status=143/n/a 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.b.service: Failed with result 'exit-code'. 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local systemd[1]: Stopped Ceph node-exporter.b for f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4. 2026-03-09T14:09:00.193 INFO:journalctl@ceph.node-exporter.b.vm04.stdout:Mar 09 14:09:00 vm04.local systemd[1]: ceph-f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4@node-exporter.b.service: Consumed 1.145s CPU time. 
2026-03-09T14:09:00.638 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:09:00.667 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T14:09:00.692 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T14:09:00.693 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm03/crash 2026-03-09T14:09:00.693 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash -- . 2026-03-09T14:09:00.731 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash: Cannot open: No such file or directory 2026-03-09T14:09:00.732 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now 2026-03-09T14:09:00.733 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm04/crash 2026-03-09T14:09:00.733 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash -- . 2026-03-09T14:09:00.756 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/crash: Cannot open: No such file or directory 2026-03-09T14:09:00.757 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-09T14:09:00.757 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-09T14:09:00.758 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v 'but it is still running' | egrep -v 'overall HEALTH_' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(PG_' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(POOL_APP_NOT_ENABLED\)' | head -n 1 2026-03-09T14:09:00.805 INFO:tasks.cephadm:Compressing logs... 2026-03-09T14:09:00.805 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:09:00.847 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T14:09:00.872 INFO:teuthology.orchestra.run.vm04.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T14:09:00.873 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T14:09:00.873 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log 2026-03-09T14:09:00.874 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.b.log 2026-03-09T14:09:00.875 INFO:teuthology.orchestra.run.vm03.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T14:09:00.875 INFO:teuthology.orchestra.run.vm03.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T14:09:00.876 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.a.log 2026-03-09T14:09:00.877 
INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log 2026-03-09T14:09:00.879 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.a.log: 91.6% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T14:09:00.879 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.y.log 2026-03-09T14:09:00.879 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log 2026-03-09T14:09:00.881 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log: 92.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log.gz 2026-03-09T14:09:00.882 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log 2026-03-09T14:09:00.882 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.b.log: 91.3% 95.4% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log.gz 2026-03-09T14:09:00.883 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log 2026-03-09T14:09:00.883 INFO:teuthology.orchestra.run.vm04.stderr: -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T14:09:00.884 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log: 80.2% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log.gz 2026-03-09T14:09:00.886 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.y.log: gzip -5 --verbose -- 
/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log 2026-03-09T14:09:00.887 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log 2026-03-09T14:09:00.889 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log: 90.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log.gz 2026-03-09T14:09:00.889 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.x.log 2026-03-09T14:09:00.890 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log: 86.7% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.log.gz 2026-03-09T14:09:00.890 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.4.log 2026-03-09T14:09:00.893 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log: 94.2% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.audit.log.gz 2026-03-09T14:09:00.894 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log 2026-03-09T14:09:00.894 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.x.log: 90.9% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.x.log.gz 2026-03-09T14:09:00.895 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.5.log 2026-03-09T14:09:00.897 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log: 88.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph.cephadm.log.gz 2026-03-09T14:09:00.899 
INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.6.log 2026-03-09T14:09:00.904 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.c.log 2026-03-09T14:09:00.904 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.0.log 2026-03-09T14:09:00.916 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.7.log 2026-03-09T14:09:00.917 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.c.log: 95.4% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-volume.log.gz 2026-03-09T14:09:00.921 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.1.log 2026-03-09T14:09:00.925 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.2.log 2026-03-09T14:09:00.930 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.6.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/tcmu-runner.log 2026-03-09T14:09:00.932 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.3.log 2026-03-09T14:09:00.940 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.7.log: 
/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/tcmu-runner.log: 62.7% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/tcmu-runner.log.gz 2026-03-09T14:09:00.945 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-client.rgw.foo.a.log 2026-03-09T14:09:00.952 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.3.log: /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-client.rgw.foo.a.log: 58.5% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-client.rgw.foo.a.log.gz 2026-03-09T14:09:01.142 INFO:teuthology.orchestra.run.vm03.stderr: 89.8% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mgr.y.log.gz 2026-03-09T14:09:01.281 INFO:teuthology.orchestra.run.vm04.stderr: 91.5% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.b.log.gz 2026-03-09T14:09:01.486 INFO:teuthology.orchestra.run.vm03.stderr: 92.5% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.c.log.gz 2026-03-09T14:09:01.911 INFO:teuthology.orchestra.run.vm03.stderr: 91.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-mon.a.log.gz 2026-03-09T14:09:03.305 INFO:teuthology.orchestra.run.vm04.stderr: 94.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.6.log.gz 2026-03-09T14:09:03.330 INFO:teuthology.orchestra.run.vm04.stderr: 94.6% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.5.log.gz 2026-03-09T14:09:03.379 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.2.log.gz 2026-03-09T14:09:03.429 INFO:teuthology.orchestra.run.vm04.stderr: 94.7% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.7.log.gz 
2026-03-09T14:09:03.454 INFO:teuthology.orchestra.run.vm04.stderr: 94.8% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.4.log.gz 2026-03-09T14:09:03.455 INFO:teuthology.orchestra.run.vm04.stderr: 2026-03-09T14:09:03.455 INFO:teuthology.orchestra.run.vm04.stderr:real 0m2.593s 2026-03-09T14:09:03.455 INFO:teuthology.orchestra.run.vm04.stderr:user 0m4.907s 2026-03-09T14:09:03.455 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.224s 2026-03-09T14:09:03.560 INFO:teuthology.orchestra.run.vm03.stderr: 94.8% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.0.log.gz 2026-03-09T14:09:03.678 INFO:teuthology.orchestra.run.vm03.stderr: 94.7% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.1.log.gz 2026-03-09T14:09:03.785 INFO:teuthology.orchestra.run.vm03.stderr: 94.8% -- replaced with /var/log/ceph/f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4/ceph-osd.3.log.gz 2026-03-09T14:09:03.787 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-09T14:09:03.787 INFO:teuthology.orchestra.run.vm03.stderr:real 0m2.925s 2026-03-09T14:09:03.787 INFO:teuthology.orchestra.run.vm03.stderr:user 0m5.464s 2026-03-09T14:09:03.787 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.260s 2026-03-09T14:09:03.787 INFO:tasks.cephadm:Archiving logs... 2026-03-09T14:09:03.787 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm03/log 2026-03-09T14:09:03.787 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T14:09:04.113 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm04/log 2026-03-09T14:09:04.113 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T14:09:04.349 INFO:tasks.cephadm:Removing cluster... 
2026-03-09T14:09:04.349 DEBUG:teuthology.orchestra.run.vm03:> sudo cephadm rm-cluster --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 --force 2026-03-09T14:09:04.478 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T14:09:04.704 DEBUG:teuthology.orchestra.run.vm04:> sudo cephadm rm-cluster --fsid f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 --force 2026-03-09T14:09:04.831 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: f0fef664-1bbf-11f1-8c1d-8f2b7e4bd0b4 2026-03-09T14:09:05.055 INFO:tasks.cephadm:Teardown complete 2026-03-09T14:09:05.055 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T14:09:05.057 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T14:09:05.057 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T14:09:05.058 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T14:09:05.090 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-09T14:09:05.090 DEBUG:teuthology.orchestra.run.vm03:>
2026-03-09T14:09:05.090 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-09T14:09:05.090 DEBUG:teuthology.orchestra.run.vm03:> sudo yum -y remove $d || true
2026-03-09T14:09:05.090 DEBUG:teuthology.orchestra.run.vm03:> done
2026-03-09T14:09:05.095 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system.
2026-03-09T14:09:05.095 DEBUG:teuthology.orchestra.run.vm04:>
2026-03-09T14:09:05.095 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do
2026-03-09T14:09:05.095 DEBUG:teuthology.orchestra.run.vm04:> sudo yum -y remove $d || true
2026-03-09T14:09:05.095 DEBUG:teuthology.orchestra.run.vm04:> done
2026-03-09T14:09:05.303 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 39 M
2026-03-09T14:09:05.304 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T14:09:05.306 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T14:09:05.306 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 39 M
2026-03-09T14:09:05.310 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T14:09:05.313 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T14:09:05.313 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T14:09:05.319 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T14:09:05.319 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T14:09:05.326 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T14:09:05.326 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:05.349 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T14:09:05.357 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T14:09:05.371 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.374 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-09T14:09:05.380 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.383 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.384 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.393 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.397 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.409 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.466 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.466 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.490 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.490 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.523 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.550 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:05.724 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Remove 4 Packages
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 212 M
2026-03-09T14:09:05.725 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T14:09:05.728 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T14:09:05.728 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T14:09:05.754 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T14:09:05.754 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:Remove 4 Packages
2026-03-09T14:09:05.808 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:05.809 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 212 M
2026-03-09T14:09:05.809 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T14:09:05.811 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T14:09:05.811 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T14:09:05.816 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T14:09:05.822 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T14:09:05.824 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-09T14:09:05.827 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-09T14:09:05.833 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T14:09:05.833 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:05.842 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T14:09:05.896 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:05.902 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T14:09:05.904 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-09T14:09:05.908 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-09T14:09:05.922 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T14:09:05.922 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T14:09:05.922 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-09T14:09:05.922 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-09T14:09:05.925 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:05.970 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:05.996 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-09T14:09:05.996 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-09T14:09:05.996 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-09T14:09:05.996 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-09T14:09:06.042 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-09T14:09:06.042 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.043 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:06.043 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-09T14:09:06.043 INFO:teuthology.orchestra.run.vm03.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-09T14:09:06.043 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.043 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:06.181 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Remove 8 Packages
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 28 M
2026-03-09T14:09:06.182 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T14:09:06.185 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T14:09:06.185 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T14:09:06.209 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T14:09:06.209 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T14:09:06.251 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T14:09:06.256 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T14:09:06.259 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-09T14:09:06.261 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-09T14:09:06.264 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-09T14:09:06.266 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-09T14:09:06.268 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-09T14:09:06.269 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:Remove 8 Packages
2026-03-09T14:09:06.270 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.271 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 28 M
2026-03-09T14:09:06.271 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T14:09:06.273 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T14:09:06.273 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T14:09:06.290 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.291 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.297 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T14:09:06.297 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:06.299 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T14:09:06.321 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.323 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.341 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:06.346 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T14:09:06.350 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-09T14:09:06.352 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-09T14:09:06.355 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-09T14:09:06.357 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-09T14:09:06.360 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-09T14:09:06.379 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.380 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.388 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-09T14:09:06.409 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.410 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-09T14:09:06.413 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: unzip-6.0-59.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout: zip-3.0-35.el9.x86_64
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:06.465 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-09T14:09:06.509 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: lua-5.4.4-4.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: unzip-6.0-59.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout: zip-3.0-35.el9.x86_64
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:06.562 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:06.683 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout:===========================================================================================
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-09T14:09:06.689
INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T14:09:06.689 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 
2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T14:09:06.690 
INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout:=========================================================================================== 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout:Remove 100 Packages 2026-03-09T14:09:06.690 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:06.691 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 612 M 2026-03-09T14:09:06.691 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T14:09:06.717 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T14:09:06.718 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T14:09:06.774 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 
2026-03-09T14:09:06.779 INFO:teuthology.orchestra.run.vm03.stdout:=========================================================================================== 2026-03-09T14:09:06.779 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout:=========================================================================================== 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages: 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused 
dependencies: 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-09T14:09:06.780 
INFO:teuthology.orchestra.run.vm03.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-09T14:09:06.780 
INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-09T14:09:06.780 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-09T14:09:06.781 
INFO:teuthology.orchestra.run.vm03.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout:=========================================================================================== 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout:Remove 100 Packages 2026-03-09T14:09:06.781 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:06.782 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 612 M 2026-03-09T14:09:06.782 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-09T14:09:06.808 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-09T14:09:06.808 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-09T14:09:06.829 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T14:09:06.829 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T14:09:06.918 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-09T14:09:06.919 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-09T14:09:06.978 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T14:09:06.978 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T14:09:06.986 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-09T14:09:07.006 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.007 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.020 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.045 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100 2026-03-09T14:09:07.045 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T14:09:07.073 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-09T14:09:07.073 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T14:09:07.081 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/100 2026-03-09T14:09:07.100 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.100 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.100 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-09T14:09:07.100 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-09T14:09:07.101 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-09T14:09:07.101 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.101 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.101 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T14:09:07.110 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100 2026-03-09T14:09:07.114 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:07.115 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100 2026-03-09T14:09:07.115 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:07.128 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:07.135 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100 2026-03-09T14:09:07.139 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/100 2026-03-09T14:09:07.139 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T14:09:07.139 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100 2026-03-09T14:09:07.148 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100 2026-03-09T14:09:07.152 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not 
supported for this. 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-09T14:09:07.175 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.181 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.190 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.195 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/100 2026-03-09T14:09:07.205 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/100 2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/100 2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T14:09:07.209 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.218 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.222 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:07.228 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/100 2026-03-09T14:09:07.230 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.233 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/100 2026-03-09T14:09:07.233 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/100 2026-03-09T14:09:07.239 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/100 2026-03-09T14:09:07.242 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/100 2026-03-09T14:09:07.244 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/100 2026-03-09T14:09:07.246 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/100 2026-03-09T14:09:07.253 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/100 2026-03-09T14:09:07.265 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/100 2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-09T14:09:07.270 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.272 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/100 2026-03-09T14:09:07.276 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.282 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/100 2026-03-09T14:09:07.285 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:07.289 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/100 2026-03-09T14:09:07.301 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.302 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.302 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-09T14:09:07.302 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.309 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.320 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/100 2026-03-09T14:09:07.320 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/100 2026-03-09T14:09:07.322 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/100 2026-03-09T14:09:07.327 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/100 2026-03-09T14:09:07.328 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/100 2026-03-09T14:09:07.330 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/100 2026-03-09T14:09:07.332 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/100 2026-03-09T14:09:07.339 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/100 2026-03-09T14:09:07.341 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/100 2026-03-09T14:09:07.350 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/100 2026-03-09T14:09:07.351 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100 2026-03-09T14:09:07.354 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/100 2026-03-09T14:09:07.360 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100 2026-03-09T14:09:07.361 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/100 2026-03-09T14:09:07.371 
INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/100 2026-03-09T14:09:07.377 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/100 2026-03-09T14:09:07.410 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/100 2026-03-09T14:09:07.417 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/100 2026-03-09T14:09:07.419 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/100 2026-03-09T14:09:07.428 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/100 2026-03-09T14:09:07.439 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/100 2026-03-09T14:09:07.440 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100 2026-03-09T14:09:07.447 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/100 2026-03-09T14:09:07.459 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/100 2026-03-09T14:09:07.477 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/100 2026-03-09T14:09:07.489 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.489 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 
2026-03-09T14:09:07.490 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.490 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.516 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.533 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/100 2026-03-09T14:09:07.538 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/100 2026-03-09T14:09:07.541 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/100 2026-03-09T14:09:07.543 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/100 2026-03-09T14:09:07.544 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/100 2026-03-09T14:09:07.561 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/100 2026-03-09T14:09:07.565 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.565 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.565 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T14:09:07.565 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T14:09:07.565 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 
2026-03-09T14:09:07.566 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.567 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.577 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.577 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-09T14:09:07.577 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.578 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.580 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.584 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/100 2026-03-09T14:09:07.587 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/100 2026-03-09T14:09:07.589 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/100 2026-03-09T14:09:07.592 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/100 2026-03-09T14:09:07.596 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/100 2026-03-09T14:09:07.600 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/100 2026-03-09T14:09:07.604 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/100 2026-03-09T14:09:07.604 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/100 2026-03-09T14:09:07.620 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/100 2026-03-09T14:09:07.626 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : 
python3-cryptography-36.0.1-5.el9.x86_64 32/100 2026-03-09T14:09:07.628 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/100 2026-03-09T14:09:07.631 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/100 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-09T14:09:07.650 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.652 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.653 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/100 2026-03-09T14:09:07.665 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/100 2026-03-09T14:09:07.666 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/100 2026-03-09T14:09:07.669 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/100 2026-03-09T14:09:07.669 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/100 2026-03-09T14:09:07.671 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/100 2026-03-09T14:09:07.674 
INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/100 2026-03-09T14:09:07.674 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/100 2026-03-09T14:09:07.676 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/100 2026-03-09T14:09:07.677 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/100 2026-03-09T14:09:07.680 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/100 2026-03-09T14:09:07.681 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/100 2026-03-09T14:09:07.683 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/100 2026-03-09T14:09:07.684 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/100 2026-03-09T14:09:07.689 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/100 2026-03-09T14:09:07.706 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.706 INFO:teuthology.orchestra.run.vm04.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.706 INFO:teuthology.orchestra.run.vm04.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T14:09:07.706 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.707 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.716 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.717 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/100 2026-03-09T14:09:07.719 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/100 2026-03-09T14:09:07.722 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/100 2026-03-09T14:09:07.724 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/100 2026-03-09T14:09:07.727 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/100 2026-03-09T14:09:07.729 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/100 2026-03-09T14:09:07.732 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/100 2026-03-09T14:09:07.735 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/100 2026-03-09T14:09:07.740 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 58/100 2026-03-09T14:09:07.746 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 59/100 2026-03-09T14:09:07.747 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/100 2026-03-09T14:09:07.748 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 60/100 2026-03-09T14:09:07.750 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/100 2026-03-09T14:09:07.750 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : 
python3-typing-extensions-4.15.0-1.el9.noarch 61/100 2026-03-09T14:09:07.753 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 62/100 2026-03-09T14:09:07.755 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/100 2026-03-09T14:09:07.757 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/100 2026-03-09T14:09:07.758 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 63/100 2026-03-09T14:09:07.760 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/100 2026-03-09T14:09:07.762 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 64/100 2026-03-09T14:09:07.763 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/100 2026-03-09T14:09:07.767 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-09T14:09:07.771 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 66/100 2026-03-09T14:09:07.777 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 67/100 2026-03-09T14:09:07.780 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 68/100 2026-03-09T14:09:07.782 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 69/100 2026-03-09T14:09:07.784 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.784 INFO:teuthology.orchestra.run.vm03.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-09T14:09:07.784 INFO:teuthology.orchestra.run.vm03.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 
2026-03-09T14:09:07.784 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.784 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.788 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 70/100 2026-03-09T14:09:07.791 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 71/100 2026-03-09T14:09:07.792 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/100 2026-03-09T14:09:07.793 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/100 2026-03-09T14:09:07.795 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 72/100 2026-03-09T14:09:07.795 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/100 2026-03-09T14:09:07.799 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/100 2026-03-09T14:09:07.804 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 73/100 2026-03-09T14:09:07.805 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/100 2026-03-09T14:09:07.806 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/100 2026-03-09T14:09:07.809 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/100 2026-03-09T14:09:07.809 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 74/100 2026-03-09T14:09:07.812 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 57/100 2026-03-09T14:09:07.812 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 75/100 2026-03-09T14:09:07.815 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : 
python3-certifi-2023.05.07-4.el9.noarch 76/100 2026-03-09T14:09:07.817 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 77/100 2026-03-09T14:09:07.820 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 58/100 2026-03-09T14:09:07.823 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 78/100 2026-03-09T14:09:07.824 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 59/100 2026-03-09T14:09:07.826 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 60/100 2026-03-09T14:09:07.826 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 79/100 2026-03-09T14:09:07.829 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 61/100 2026-03-09T14:09:07.832 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 62/100 2026-03-09T14:09:07.837 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 63/100 2026-03-09T14:09:07.842 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 64/100 2026-03-09T14:09:07.845 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.845 INFO:teuthology.orchestra.run.vm04.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-09T14:09:07.845 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:07.847 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-09T14:09:07.852 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 66/100 2026-03-09T14:09:07.853 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.857 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 67/100 2026-03-09T14:09:07.860 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 68/100 2026-03-09T14:09:07.863 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 69/100 2026-03-09T14:09:07.868 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 70/100 2026-03-09T14:09:07.872 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 71/100 2026-03-09T14:09:07.876 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 72/100 2026-03-09T14:09:07.880 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.880 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-09T14:09:07.884 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 73/100 2026-03-09T14:09:07.889 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 74/100 2026-03-09T14:09:07.893 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 75/100 2026-03-09T14:09:07.893 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-09T14:09:07.895 INFO:teuthology.orchestra.run.vm03.stdout: 
Erasing : python3-certifi-2023.05.07-4.el9.noarch 76/100 2026-03-09T14:09:07.897 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 77/100 2026-03-09T14:09:07.898 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 82/100 2026-03-09T14:09:07.901 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 83/100 2026-03-09T14:09:07.903 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 78/100 2026-03-09T14:09:07.903 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 84/100 2026-03-09T14:09:07.904 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-09T14:09:07.906 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 79/100 2026-03-09T14:09:07.927 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.927 INFO:teuthology.orchestra.run.vm03.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-09T14:09:07.927 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:07.934 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.963 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 80/100 2026-03-09T14:09:07.963 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-09T14:09:07.976 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 81/100 2026-03-09T14:09:07.981 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 82/100 2026-03-09T14:09:07.984 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 83/100 2026-03-09T14:09:07.986 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 84/100 2026-03-09T14:09:07.986 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /sys 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /proc 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /mnt 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /var/tmp 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /home 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /root 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout:skipping the directory /tmp 2026-03-09T14:09:13.591 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:13.600 
INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 85/100 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /sys 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /proc 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /mnt 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /var/tmp 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /home 2026-03-09T14:09:13.614 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /root 2026-03-09T14:09:13.615 INFO:teuthology.orchestra.run.vm03.stdout:skipping the directory /tmp 2026-03-09T14:09:13.615 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:13.618 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.618 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.625 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.625 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 86/100 2026-03-09T14:09:13.628 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100 2026-03-09T14:09:13.630 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100 2026-03-09T14:09:13.633 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100 2026-03-09T14:09:13.636 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100 2026-03-09T14:09:13.636 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : 
libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-09T14:09:13.643 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.643 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.652 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-09T14:09:13.653 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 87/100 2026-03-09T14:09:13.654 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100 2026-03-09T14:09:13.656 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 88/100 2026-03-09T14:09:13.656 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100 2026-03-09T14:09:13.659 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 89/100 2026-03-09T14:09:13.659 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100 2026-03-09T14:09:13.661 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 90/100 2026-03-09T14:09:13.662 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100 2026-03-09T14:09:13.664 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 91/100 2026-03-09T14:09:13.664 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 92/100 2026-03-09T14:09:13.668 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100 2026-03-09T14:09:13.676 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100 2026-03-09T14:09:13.678 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 
92/100 2026-03-09T14:09:13.681 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 93/100 2026-03-09T14:09:13.681 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 99/100 2026-03-09T14:09:13.681 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.683 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 94/100 2026-03-09T14:09:13.686 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 95/100 2026-03-09T14:09:13.689 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 96/100 2026-03-09T14:09:13.695 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 97/100 2026-03-09T14:09:13.702 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 98/100 2026-03-09T14:09:13.707 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 99/100 2026-03-09T14:09:13.707 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.782 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100 
2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: 
Verifying : grpc-data-1.46.7-10.el9.noarch 21/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100 
2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100 
2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100 2026-03-09T14:09:13.783 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 
69/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 81/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100 2026-03-09T14:09:13.784 
INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 88/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 97/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100 2026-03-09T14:09:13.784 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/100 2026-03-09T14:09:13.813 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/100 2026-03-09T14:09:13.813 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/100 2026-03-09T14:09:13.814 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/100 2026-03-09T14:09:13.817 
INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
pciutils-3.7.0-7.el9.x86_64 33/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/100 2026-03-09T14:09:13.817 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-devel-3.9.25-3.el9.x86_64 49/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: 
Verifying : python3-logutils-0.3.5-21.el9.noarch 65/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 73/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ply-3.11-14.el9.noarch 74/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 75/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 76/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 78/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : 
python3-pycparser-2.20-6.el9.noarch 81/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 82/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 83/100 2026-03-09T14:09:13.818 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 84/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 85/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 86/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 87/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 88/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 89/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 90/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 91/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 92/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 93/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 94/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 95/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 96/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 
97/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 98/100 2026-03-09T14:09:13.819 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 99/100 2026-03-09T14:09:13.865 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.865 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: 
ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: 
openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-idna-2.10-7.el9.1.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T14:09:13.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T14:09:13.867 
INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T14:09:13.867 
INFO:teuthology.orchestra.run.vm04.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:13.867 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 100/100 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: 
ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-09T14:09:13.898 INFO:teuthology.orchestra.run.vm03.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-09T14:09:13.899 
INFO:teuthology.orchestra.run.vm03.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-idna-2.10-7.el9.1.noarch 
2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-09T14:09:13.899 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-oauthlib-3.1.1-5.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-ply-3.11-14.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-09T14:09:13.900 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-09T14:09:13.901 INFO:teuthology.orchestra.run.vm03.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-09T14:09:13.901 INFO:teuthology.orchestra.run.vm03.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-09T14:09:13.901 INFO:teuthology.orchestra.run.vm03.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:13.901 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:13.901 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 775 k 2026-03-09T14:09:14.095 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T14:09:14.097 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T14:09:14.097 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T14:09:14.098 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T14:09:14.099 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T14:09:14.115 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 
2026-03-09T14:09:14.115 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 775 k 2026-03-09T14:09:14.116 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-09T14:09:14.118 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-09T14:09:14.118 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-09T14:09:14.120 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-09T14:09:14.120 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-09T14:09:14.137 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-09T14:09:14.138 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.223 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.249 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:14.262 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:14.331 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:14.439 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T14:09:14.439 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:14.442 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:14.443 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 
2026-03-09T14:09:14.443 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:14.567 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-immutable-object-cache 2026-03-09T14:09:14.568 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:14.571 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:14.571 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:14.571 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:14.615 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr 2026-03-09T14:09:14.616 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:14.619 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:14.619 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:14.619 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:14.738 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr 2026-03-09T14:09:14.738 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:14.741 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:14.742 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:14.742 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:14.797 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T14:09:14.797 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:14.800 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:14.801 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:14.801 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
2026-03-09T14:09:14.916 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-dashboard 2026-03-09T14:09:14.916 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:14.920 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:14.920 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:14.920 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:14.960 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T14:09:14.960 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:14.963 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:14.964 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:14.964 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:15.089 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-09T14:09:15.089 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:15.092 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:15.092 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:15.093 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:15.127 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-rook 2026-03-09T14:09:15.127 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:15.130 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:15.131 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:15.131 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 
2026-03-09T14:09:15.251 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-rook 2026-03-09T14:09:15.251 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:15.254 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:15.255 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:15.255 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:15.284 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T14:09:15.284 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:15.288 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:15.288 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:15.288 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:15.411 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-mgr-cephadm 2026-03-09T14:09:15.411 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:15.414 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:15.415 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:15.415 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout:Remove 1 Package 2026-03-09T14:09:15.453 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.454 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.6 M 2026-03-09T14:09:15.454 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T14:09:15.455 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T14:09:15.455 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T14:09:15.464 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 
2026-03-09T14:09:15.464 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T14:09:15.487 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T14:09:15.501 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.562 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Remove 1 Package 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.6 M 2026-03-09T14:09:15.585 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-09T14:09:15.587 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 
2026-03-09T14:09:15.587 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-09T14:09:15.597 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 2026-03-09T14:09:15.597 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction 2026-03-09T14:09:15.603 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.603 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.603 INFO:teuthology.orchestra.run.vm04.stdout:Removed: 2026-03-09T14:09:15.603 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:15.604 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.604 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:15.622 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1 2026-03-09T14:09:15.636 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.696 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout:Removed: 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:15.733 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:15.766 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: ceph-volume 2026-03-09T14:09:15.766 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal. 2026-03-09T14:09:15.769 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 
2026-03-09T14:09:15.770 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do. 2026-03-09T14:09:15.770 INFO:teuthology.orchestra.run.vm04.stdout:Complete! 2026-03-09T14:09:15.895 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: ceph-volume 2026-03-09T14:09:15.896 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal. 2026-03-09T14:09:15.898 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 2026-03-09T14:09:15.899 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do. 2026-03-09T14:09:15.899 INFO:teuthology.orchestra.run.vm03.stdout:Complete! 2026-03-09T14:09:15.936 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved. 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:Removing: 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages: 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================ 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:Remove 2 Packages 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T14:09:15.937 
INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 610 k 2026-03-09T14:09:15.937 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check 2026-03-09T14:09:15.939 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded. 2026-03-09T14:09:15.939 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test 2026-03-09T14:09:15.948 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded. 2026-03-09T14:09:15.948 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction 2026-03-09T14:09:15.972 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1 2026-03-09T14:09:15.974 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T14:09:15.986 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T14:09:16.042 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-09T14:09:16.042 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-09T14:09:16.069 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved. 
2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Removing: 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages: 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================ 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Remove 2 Packages 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 610 k 2026-03-09T14:09:16.070 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check 2026-03-09T14:09:16.072 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded. 2026-03-09T14:09:16.072 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test 2026-03-09T14:09:16.083 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded. 
2026-03-09T14:09:16.083 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.090 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:16.109 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:16.112 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:16.125 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-09T14:09:16.192 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-09T14:09:16.192 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.241 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:16.280 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repo Size
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Remove 3 Packages
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 3.7 M
2026-03-09T14:09:16.281 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T14:09:16.283 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T14:09:16.283 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T14:09:16.298 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T14:09:16.299 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T14:09:16.328 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T14:09:16.331 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-09T14:09:16.332 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-09T14:09:16.332 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.392 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.392 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-09T14:09:16.392 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.429 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repo Size
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-09T14:09:16.431 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:Remove 3 Packages
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 3.7 M
2026-03-09T14:09:16.432 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T14:09:16.433 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T14:09:16.433 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T14:09:16.450 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T14:09:16.450 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:16.484 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:16.487 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-09T14:09:16.488 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-09T14:09:16.488 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.548 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.548 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-09T14:09:16.548 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.586 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:16.599 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: libcephfs-devel
2026-03-09T14:09:16.599 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:16.602 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:16.603 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:16.603 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:16.764 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: libcephfs-devel
2026-03-09T14:09:16.764 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:16.767 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:16.768 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:16.768 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:16.791 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: Package Arch Version Repository Size
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Removing:
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Removing dependent packages:
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Removing unused dependencies:
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Transaction Summary
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:================================================================================
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Remove 20 Packages
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Freed space: 79 M
2026-03-09T14:09:16.793 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction check
2026-03-09T14:09:16.797 INFO:teuthology.orchestra.run.vm04.stdout:Transaction check succeeded.
2026-03-09T14:09:16.797 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction test
2026-03-09T14:09:16.818 INFO:teuthology.orchestra.run.vm04.stdout:Transaction test succeeded.
2026-03-09T14:09:16.818 INFO:teuthology.orchestra.run.vm04.stdout:Running transaction
2026-03-09T14:09:16.858 INFO:teuthology.orchestra.run.vm04.stdout: Preparing : 1/1
2026-03-09T14:09:16.860 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-09T14:09:16.862 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-09T14:09:16.865 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-09T14:09:16.866 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T14:09:16.878 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T14:09:16.880 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-09T14:09:16.882 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-09T14:09:16.884 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T14:09:16.885 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-09T14:09:16.888 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-09T14:09:16.888 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:16.902 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:16.902 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T14:09:16.902 INFO:teuthology.orchestra.run.vm04.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-09T14:09:16.903 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:16.917 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T14:09:16.919 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-09T14:09:16.923 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-09T14:09:16.927 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-09T14:09:16.930 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-09T14:09:16.934 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-09T14:09:16.936 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-09T14:09:16.937 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-09T14:09:16.940 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-09T14:09:16.943 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:16.944 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: Package Arch Version Repository Size
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Removing:
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Removing dependent packages:
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Removing unused dependencies:
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Remove 20 Packages
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Freed space: 79 M
2026-03-09T14:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-09T14:09:16.949 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-09T14:09:16.949 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-09T14:09:16.953 INFO:teuthology.orchestra.run.vm04.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T14:09:16.970 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-09T14:09:16.970 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-09T14:09:17.011 INFO:teuthology.orchestra.run.vm04.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T14:09:17.011 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-09T14:09:17.011 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-09T14:09:17.012 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-09T14:09:17.015 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-09T14:09:17.017 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-09T14:09:17.020 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-09T14:09:17.020 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T14:09:17.034 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-09T14:09:17.037 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-09T14:09:17.039 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-09T14:09:17.041 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T14:09:17.043 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-09T14:09:17.045 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-09T14:09:17.046 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:17.057 INFO:teuthology.orchestra.run.vm04.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-09T14:09:17.057 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout:Removed:
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: re2-1:20211101-20.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout:
2026-03-09T14:09:17.058 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:17.059 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:17.060 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T14:09:17.060 INFO:teuthology.orchestra.run.vm03.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-09T14:09:17.060 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:17.074 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-09T14:09:17.078 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-09T14:09:17.083 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-09T14:09:17.087 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-09T14:09:17.090 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-09T14:09:17.092 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-09T14:09:17.095 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-09T14:09:17.097 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-09T14:09:17.099 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-09T14:09:17.114 INFO:teuthology.orchestra.run.vm03.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-09T14:09:17.179 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-09T14:09:17.180 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout:Removed:
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-09T14:09:17.224 INFO:teuthology.orchestra.run.vm03.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: re2-1:20211101-20.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T14:09:17.225 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:17.284 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: librbd1
2026-03-09T14:09:17.284 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:17.287 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:17.288 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:17.288 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:17.438 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: librbd1
2026-03-09T14:09:17.439 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:17.441 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:17.442 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:17.442 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:17.480 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rados
2026-03-09T14:09:17.480 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:17.483 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:17.483 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:17.483 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:17.632 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rados
2026-03-09T14:09:17.632 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:17.634 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:17.635 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:17.635 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:17.666 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rgw
2026-03-09T14:09:17.666 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:17.669 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:17.669 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:17.669 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:17.804 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rgw
2026-03-09T14:09:17.804 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:17.806 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:17.807 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:17.807 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:17.840 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-cephfs
2026-03-09T14:09:17.840 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:17.842 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:17.843 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:17.843 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:17.970 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-cephfs
2026-03-09T14:09:17.970 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:17.972 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:17.973 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:17.973 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:18.006 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: python3-rbd
2026-03-09T14:09:18.006 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:18.009 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:18.009 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:18.009 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:18.140 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: python3-rbd
2026-03-09T14:09:18.141 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:18.143 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:18.143 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:18.143 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:18.178 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-fuse
2026-03-09T14:09:18.179 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:18.181 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:18.181 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:18.181 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:18.330 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-fuse
2026-03-09T14:09:18.330 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:18.334 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:18.334 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:18.334 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:18.360 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-mirror
2026-03-09T14:09:18.360 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:18.362 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:18.362 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:18.362 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:18.509 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-mirror
2026-03-09T14:09:18.509 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:18.511 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:18.512 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:18.512 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:18.537 INFO:teuthology.orchestra.run.vm04.stdout:No match for argument: rbd-nbd
2026-03-09T14:09:18.537 INFO:teuthology.orchestra.run.vm04.stderr:No packages marked for removal.
2026-03-09T14:09:18.540 INFO:teuthology.orchestra.run.vm04.stdout:Dependencies resolved.
2026-03-09T14:09:18.540 INFO:teuthology.orchestra.run.vm04.stdout:Nothing to do.
2026-03-09T14:09:18.540 INFO:teuthology.orchestra.run.vm04.stdout:Complete!
2026-03-09T14:09:18.563 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean all
2026-03-09T14:09:18.690 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: rbd-nbd
2026-03-09T14:09:18.690 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-09T14:09:18.692 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-09T14:09:18.693 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-09T14:09:18.693 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-09T14:09:18.699 INFO:teuthology.orchestra.run.vm04.stdout:56 files removed
2026-03-09T14:09:18.715 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean all
2026-03-09T14:09:18.725 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T14:09:18.749 DEBUG:teuthology.orchestra.run.vm04:> sudo yum clean expire-cache
2026-03-09T14:09:18.844 INFO:teuthology.orchestra.run.vm03.stdout:56 files removed
2026-03-09T14:09:18.865 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T14:09:18.889 DEBUG:teuthology.orchestra.run.vm03:> sudo yum clean expire-cache
2026-03-09T14:09:18.904 INFO:teuthology.orchestra.run.vm04.stdout:Cache was expired
2026-03-09T14:09:18.904 INFO:teuthology.orchestra.run.vm04.stdout:0 files removed
2026-03-09T14:09:18.927 DEBUG:teuthology.parallel:result is None
2026-03-09T14:09:19.044 INFO:teuthology.orchestra.run.vm03.stdout:Cache was expired
2026-03-09T14:09:19.044 INFO:teuthology.orchestra.run.vm03.stdout:0 files removed
2026-03-09T14:09:19.066 DEBUG:teuthology.parallel:result is None
2026-03-09T14:09:19.066 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-03-09T14:09:19.066 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local
2026-03-09T14:09:19.066 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T14:09:19.067 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-09T14:09:19.091 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-09T14:09:19.094 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-09T14:09:19.158 DEBUG:teuthology.parallel:result is None
2026-03-09T14:09:19.162 DEBUG:teuthology.parallel:result is None
2026-03-09T14:09:19.162 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T14:09:19.164 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T14:09:19.164 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:09:19.200 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T14:09:19.214 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-09T14:09:19.218 INFO:teuthology.orchestra.run.vm04.stderr:bash: line 1: ntpq: command not found
2026-03-09T14:09:19.279 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T14:09:19.279 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-09T14:09:19.279 INFO:teuthology.orchestra.run.vm03.stdout:^+ 104-167-24-26.lunoxia.fc> 2 8 377 59 +768us[ +805us] +/- 44ms
2026-03-09T14:09:19.279 INFO:teuthology.orchestra.run.vm03.stdout:^+ red-pelican-63749.zap.cl> 2 6 377 55 -2778us[-2741us] +/- 21ms
2026-03-09T14:09:19.279 INFO:teuthology.orchestra.run.vm03.stdout:^* ntp2.wup-de.hosts.301-mo> 2 7 377 54 -2557us[-2521us] +/- 20ms
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm03.stdout:^+ sv5.ggsrv.de 2 6 377 54 +5500us[+5500us] +/- 23ms
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:===============================================================================
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:^+ red-pelican-63749.zap.cl> 2 6 377 52 -2981us[-2981us] +/- 21ms
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:^* ntp2.wup-de.hosts.301-mo> 2 7 377 183 -2531us[-2528us] +/- 20ms
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:^+ sv5.ggsrv.de 2 7 377 55 +5478us[+5478us] +/- 23ms
2026-03-09T14:09:19.280 INFO:teuthology.orchestra.run.vm04.stdout:^+ 104-167-24-26.lunoxia.fc> 2 8 377 52 +794us[ +794us] +/- 44ms
2026-03-09T14:09:19.280 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T14:09:19.282 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T14:09:19.283 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T14:09:19.285 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T14:09:19.286 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T14:09:19.288 INFO:teuthology.task.internal:Duration was 1789.767360 seconds
2026-03-09T14:09:19.288 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T14:09:19.290 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T14:09:19.290 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T14:09:19.321 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T14:09:19.358 INFO:teuthology.orchestra.run.vm04.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T14:09:19.360 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-09T14:09:19.765 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T14:09:19.765 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-09T14:09:19.765 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T14:09:19.789 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-09T14:09:19.789 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T14:09:19.829 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T14:09:19.829 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:09:19.831 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:09:20.300 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T14:09:20.300 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:09:20.302 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T14:09:20.324 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:09:20.324 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:09:20.324 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T14:09:20.324 INFO:teuthology.orchestra.run.vm03.stderr: --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:09:20.324 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T14:09:20.328 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T14:09:20.328 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T14:09:20.328 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose --/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T14:09:20.328 INFO:teuthology.orchestra.run.vm04.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T14:09:20.328 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T14:09:20.464 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T14:09:20.474 INFO:teuthology.orchestra.run.vm03.stderr: 97.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T14:09:20.476 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T14:09:20.478 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T14:09:20.478 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T14:09:20.540 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T14:09:20.563 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T14:09:20.565 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:09:20.581 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:09:20.602 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-09T14:09:20.628 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-09T14:09:20.642 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:09:20.671 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:09:20.671 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T14:09:20.696 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T14:09:20.696 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T14:09:20.699 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T14:09:20.700 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm03
2026-03-09T14:09:20.700 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T14:09:20.739 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/495/remote/vm04
2026-03-09T14:09:20.739 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T14:09:20.766 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T14:09:20.767 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T14:09:20.781 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T14:09:20.821 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T14:09:20.824 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T14:09:20.824 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T14:09:20.826 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T14:09:20.826 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T14:09:20.836 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T14:09:20.850 INFO:teuthology.orchestra.run.vm03.stdout: 8532147 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 14:09 /home/ubuntu/cephtest
2026-03-09T14:09:20.877 INFO:teuthology.orchestra.run.vm04.stdout: 8532122 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 9 14:09 /home/ubuntu/cephtest
2026-03-09T14:09:20.878 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T14:09:20.886 INFO:teuthology.run:Summary data:
description: orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python}
duration: 1789.7673599720001
flavor: default
owner: kyr
success: true
2026-03-09T14:09:20.886 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T14:09:20.905 INFO:teuthology.run:pass